How to better organize a class with a lot of fields?

I am currently implementing something similar to a hospital intranet site, where doctors can see info about their patients.
Currently, I have a LOT of info regarding each Client: his full name, date of birth, blood type, where he lives, diseases he has had, etc.
My first attempt was something of the form:
class Client {
    private String fullName;
    private Date dateOfBirth;
    ...

    public String getFullName() { ... }
    public void setFullName(String fullName) { ... }
    public Date getDateOfBirth() { ... }
    public void setDateOfBirth(Date dateOfBirth) { ... }
    ...
}
which is basically putting everything together under the same class.
After a while I decided that maybe I should pack similar concepts together into a more general one. For example, I can encapsulate both userName and password into a single concept, say LoginInfo.
If I do this, should I provide all the getters/setters on the Client class that delegate the work to the correct inner concepts, or should I just provide getters for the concepts themselves? The first approach would shield the outside world from the Client class's implementation, but then maybe we wouldn't gain that much by having all these inner concepts.
Should code outside the Client class even know about the different kinds of concepts used inside it?
Any other idea / approach?
I still don't know much about what methods I'll need on the Client class. Maybe if there are a lot, it'd definitely be a good idea to use small inner concepts to group related methods, instead of having one big class with low cohesion.
The data of Client will all be persisted using a standard database, if that makes any difference.

I would say it is useful to pack related pieces of data into common classes. I would only provide delegating getters/setters in Client for very commonly used properties, though (if even then -- it should be a case-by-case decision). If a concept makes sense in the problem domain, it is fine to expose it to the outside world too. Your LoginInfo is a marginal detail in this regard, but disease history, health check results, etc. are prime candidates for this.
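To make that concrete, here is a minimal Java sketch of the idea (the MedicalHistory type is just a made-up placeholder for whichever concepts fit your domain): expose the concepts that belong to the problem domain, and add a delegating getter only for the odd property that is used everywhere.

class LoginInfo {
    private String userName;
    private String password;
    public String getUserName() { return userName; }
}

class MedicalHistory {
    private final java.util.List<String> diseases = new java.util.ArrayList<>();
    public java.util.List<String> getDiseases() { return diseases; }
}

public class Client {
    private String fullName;
    private LoginInfo loginInfo;
    private MedicalHistory medicalHistory;

    public String getFullName() { return fullName; }

    // Expose the concept itself where it is part of the problem domain
    public MedicalHistory getMedicalHistory() { return medicalHistory; }

    // Delegate only for a very commonly used property
    public String getUserName() { return loginInfo.getUserName(); }
}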
I would also recommend you check out Martin Fowler's excellent Analysis Patterns, which dedicates a chapter to health care patterns; you will probably get some useful ideas out of it.

Something to consider when deciding how to organize data: are there any requirements for tracking the history of the data? For example, do you need to know what the patient's address was 5 years ago (in addition to knowing their current address, of course)? If so, making that "historically sensitive" data its own class will likely make things easier for you down the road. Of course, some data won't be "historically sensitive" -- date of birth, for example. :)
Something else to consider: what data will be shared among patients? If you maintain data about family medical history, should that data be shared among siblings? If so, then encapsulating that data in its own object will save you lots of copy/synchronization pain later.
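As a rough Java sketch of that last point (all names here are hypothetical): siblings hold a reference to the same family-history object instead of each keeping their own copy.

class FamilyMedicalHistory {
    private final java.util.List<String> conditions = new java.util.ArrayList<>();
    public void addCondition(String condition) { conditions.add(condition); }
    public java.util.List<String> getConditions() { return conditions; }
}

class Patient {
    private final FamilyMedicalHistory familyHistory;
    public Patient(FamilyMedicalHistory familyHistory) { this.familyHistory = familyHistory; }
    public FamilyMedicalHistory getFamilyHistory() { return familyHistory; }
}

// A single shared instance means there is nothing to copy or keep in sync:
// FamilyMedicalHistory shared = new FamilyMedicalHistory();
// Patient alice = new Patient(shared);
// Patient bob = new Patient(shared);
// alice.getFamilyHistory().addCondition("diabetes"); // visible to bob too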
These aren't the only considerations when analyzing your data. But they're definitely part of the puzzle.

OO Analysis--Operation placement

I'm confused as to where I should place the operation/function when identifying classes. The following example -- taken from the lecture slides of object-oriented design using UML, patterns and Java -- particularly confuses me.
In this example, 3 classes are identified from the following part of a use case description: "The customer enters the store to buy a toy".
2 functions are also identified: one is enters() (placed in the Store class) and the other is buy() (placed in the Toy class).
Why are those functions not associated with the Customer, who performs them? Is there any heuristic to help with operation placement?
Your example is extremely simple, and it's hard to say something about it without context. Anyway, I'll try to answer your question. First of all: OO modeling is not about building your classes in a "natural" way. The reason is very simple: even if we wanted to model the "real world" objects, it's simply impossible. The relations between real-world objects (Customer, Store, Toy) are almost infinitely complex. Let's think about your case for a while. When a customer enters a store, there are a lot of things happening; let's try to order them:
Customer enters a store
Customer needs to interact with the "store gateway" somehow, for example with a door. Even this interaction can be complex: the store can be closed or full, an accident can happen, the door can be blocked, etc.
When the customer is finally inside the store, maybe there's a special store policy to greet customers (or every n-th customer). We can also imagine a lot of other things.
Finally, the customer wants to buy a toy. First, she needs to find that toy, which might not be so easy (how would you model this interaction?).
When the desired toy is found, she needs to take it and add it to the shopping basket.
Then the customer goes to the queue and waits for her turn.
When the waiting is over, the customer interacts with the cashier (a lot of small things, like handing over the toy, checking its price, maybe some quick chat...).
Finally, the customer can pay for the toy (check if she has enough money, select the payment method (cash, card, NFC?), leave the queue...).
The customer leaves the store (similar to the "enters a store" interaction, plus maybe a security check).
I'm absolutely sure I forgot about something. As you can see, this simple scenario is in fact very complex in the real world. That's why it's impossible to model it exactly the same way. Even if we tried, the naive 1-to-1 mapping would probably lead to a design where almost every action is a method of the Customer class: customer.enter(), customer.leave(), customer.buy(), customer.findToy(), customer.interactWithCashier(), customer.openDoor()... and a lot more. This naive mapping would be entirely bad, because every step in the "Customer enters a store" scenario is in fact a collaboration of multiple objects, each somehow connected to another. On the other hand, if we tried to implement this scenario with all its interactions, we would create a system that would take years to build and would be simply impossible to deal with (every change would require an insane number of hours).
Ok, so how do you follow OOD principles? Take just a part of the interaction. Do not try to model it exactly the way it works in the real world. Try to adjust the model to the needs of your client. Don't overload your classes with responsibility. Every class should be easy to understand, and relatively small. You can follow some of the basic principles of software modeling, such as SOLID and YAGNI. Learn about design patterns in practice (find some GoF patterns and try to implement them in your projects). Use code metrics to analyze your code (lack of cohesion of methods, efferent coupling, afferent coupling, cyclomatic complexity) to keep your code simple.
Let's get back to your specific example. According to the rules I mentioned before, a very important part of object modeling is to place methods where they belong, so that the data and the methods are "coherent" (see the Lack of Cohesion of Methods metric). Your classes should generally do one thing. In your example, the responsibility of the Store class could be, for example, to allow customers to buy toys. So we could model it this way:
public class Store {
    public void buyToy(Toy toy, Customer customer)
            throws ToyNotAvailableException, InsufficientFundsException {
        // some validation - the check* methods are private
        if (!checkToyIsAvailable(toy)) {
            throw new ToyNotAvailableException();
        }
        if (!checkCustomerHasFunds(customer, toy.price())) {
            throw new InsufficientFundsException();
        }
        // if validation succeeds, we can remove the toy from the store
        // and charge the customer
        // removeFromStore is a private method
        removeFromStore(toy);
        customer.charge(toy.price());
    }
}
Keep in mind that this is just a simple example, created to be easy to understand and read. It would need to be refined many times to make it production-ready (for example, handling the payment method, number of items, etc.).

Public vs. Private?

I don't really understand why it's generally good practice to make member variables and member functions private.
Is it for the sake of preventing people from screwing with things/more of an organizational tool?
Basically, yes, it's to prevent people from screwing with things.
Encapsulation (information hiding) is the term you're looking for.
By only publishing the bare minimum of information to the outside world, you're free to change the internals as much as you want.
For example, let's say you implement your phone book as an array of entries and don't hide that fact.
Someone then comes along and writes code which searches or manipulates your array without going through your "normal" interface. That means that, when you want to start using a linked list or some other more efficient data structure, their code will break, because it used that information.
And that's your fault for publishing that information, not theirs for using it :-)
Classic examples are the setters and getters. You might think that you could just expose the temperature variable itself in a class so that a user could just do:
Location here = new Location();
int currTemp = here.temp;
But what if you wanted to later have it actually web-scrape information from the Bureau of Meteorology whenever you asked for the temperature? If you'd encapsulated the information in the first place, the caller would just be doing:
int currTemp = here.getTemp();
and you could change the implementation of that method as much as you want. The only thing you have to preserve is the API (function name, arguments, return type and so on).
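As a sketch of how that freedom plays out (the WeatherSource interface below is invented for the example, not a real API), the getter's signature stays fixed while the source of the value changes underneath it:

public class Location {
    // Originally this might just have been a stored field:
    //   private int temp;
    //   public int getTemp() { return temp; }

    // Later the value can come from somewhere else entirely;
    // callers of getTemp() never notice the difference.
    private final WeatherSource source;

    public Location(WeatherSource source) {
        this.source = source;
    }

    public int getTemp() {
        return source.currentTemperature();
    }
}

interface WeatherSource {
    int currentTemperature();
}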
Interestingly, it's not just in code. Certain large companies will pepper their documentation with phrases like:
This technical information is for instructional purposes only and may change in future releases.
That allows them to deliver what the customer wants (the extra information) but doesn't lock them in to supporting it for all eternity.
The main reason is that you, the library developer, have insurance that nobody will be using parts of your code that you don't want to have to maintain.
Every public piece of your code can, and inevitably will get used by your customers. If you later discover that your design was actually terrible, and that version 2.0 should be written much better, then you realise that your paying customers actually want you to preserve all existing functionality, and you're locked in to maintaining backwards compatibility at the price of making better software.
By making as much of your code as possible private, you are unreservedly declaring that this code is nobody's business and that you can and will be able to rewrite it at any time.
It's to prevent people from screwing with things - but not from a security perspective.
Instead, it's intended to allow users of your class to only care about the public sections, leaving you (the author) free to modify the implementation (private) without worrying about breaking someone else's code.
For instance, most programming languages seem to store Strings as a char[] (an array of characters). If for some reason it was discovered that a linked list of nodes (each containing a single character) performed better, the internal implementation using the array could be switched, without (theoretically) breaking any code using the String class.
It's to present a clear code contract to anyone (you, someone else) who is using your object... separate "how to use it" from "how it works". This is known as Encapsulation.
On a side note, at least on .NET (probably on other platforms as well), it's not very hard for someone who really wants access to get to private portions of an object (in .NET, using reflection).
Take the typical example of a counter: the thing the bodyguard at your night club holds in his hands to make his punch harder and to count the people entering and leaving the club.
Now, the thing is defined like this:
public class Counter {
    private int count = 0;

    public void increment() {
        count++;
    }

    public void decrement() {
        count--;
    }
}
As you can see, there are no setters/getters for count, because we don't want users (programmers) of this class to be able to call myCounter.setCount(100), or even worse myCounter.count -= 10; because that's not what this thing does: it goes up by one for everyone entering and down by one for everyone leaving.
There is a scope for a lot of debate on this.
For example... if a lot of the .NET Framework were private, then this would prevent developers from screwing things up, but at the same time it would prevent devs from using the functionality.
In my personal opinion, I would give preference to making methods public, but I would suggest making use of the Facade pattern. In simple terms, you have a class that encapsulates complex functionality. For example, in the .NET Framework, WebClient is a facade that hides the complex HTTP request/response logic.
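As a rough illustration of the facade idea (a hypothetical Java sketch, not the actual WebClient), the caller sees one simple method while the multi-step connection and stream handling stays hidden:

public class DownloadFacade {
    // One public method hides the URL/stream/reader plumbing behind it
    public String downloadString(String url) throws java.io.IOException {
        java.net.URL target = new java.net.URL(url);
        try (java.io.BufferedReader reader = new java.io.BufferedReader(
                new java.io.InputStreamReader(target.openStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
            return body.toString();
        }
    }
}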
Also... keep classes simple, and you should have few public methods. That is a better abstraction than having large classes with lots of private methods.
It is useful to know how an object is 'put together'; have a look at this video on YouTube:
http://www.youtube.com/watch?v=RcZAkBVNYTA&list=PL3FEE93A664B3B2E7&index=11&feature=plpp_video

OOP: How do I deal with objects that have mutual relations?

Let's say there are two classes related to each other via some relations. For example, a Student maintains a list of the Classes he takes, and each Class has a list of Students taking it. I am afraid of letting the Student directly modify its set of Classes, because each modification would have to be followed by a similar modification to a Class's list of Students, and vice versa.
One solution is to have a class whose sole purpose is to keep track of Class-Student relations, say Registrar. But then if some method in Student requires knowledge of its Class list, the Student needs to be passed the Registrar. This seems bad. It seems Student shouldn't have access to the Registrar, where it can also access other Students. I can think of a solution, creating a class that acts as a mediator between Student and Registrar, showing the Student only what it needs to know, but this seems possibly like overkill. Another solution is to remove from Student any method that needs to access its classes and put it instead in Registrar or some other class that has access to Registrar.
The reason I'm asking is that I'm working on a chess game in Java. I'm thinking about the Piece-Cell relations and the Piece-Player relations. If in the above example it wasn't OK for a Student to have access to the Registrar, is it OK here for a Piece to have access to the Board, since a Piece needs to look around anyway to decide if a move is valid?
What's the standard practice in such cases?
If the relations can change, the classes should be decoupled as much as possible, so create an interface alongside each class and do not introduce tight relations between the classes.
You can achieve a high level of separation by using intermediate services/helpers that encapsulate the communication logic between the classes; that way you do not inject one class into the other, even if both are abstracted by interfaces. Basically, Student does not know anything about Class, and Class does not know anything about Student. I'm not sure whether such complexity makes sense in your case, but you can achieve it anyway.
Here you may find the Mediator design pattern useful; it can encapsulate the interaction logic between two decoupled entities. Take a look at it:
With the mediator pattern, communication between objects is encapsulated with a mediator object. Objects no longer communicate directly with each other, but instead communicate through the mediator. This reduces the dependencies between communicating objects, thereby lowering the coupling.
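For the Student/Class example, a minimal Java sketch of a Registrar playing the mediator role might look like this (Course stands in for the question's Class to avoid clashing with java.lang.Class; all names are illustrative):

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class Student { /* name, id, ... */ }
class Course  { /* title, code, ... */ }

class Registrar {
    private final Map<Student, Set<Course>> coursesByStudent = new HashMap<>();
    private final Map<Course, Set<Student>> studentsByCourse = new HashMap<>();

    // Both sides of the relation are updated in exactly one place
    public void enroll(Student student, Course course) {
        coursesByStudent.computeIfAbsent(student, s -> new HashSet<>()).add(course);
        studentsByCourse.computeIfAbsent(course, c -> new HashSet<>()).add(student);
    }

    public Set<Course> coursesOf(Student student) {
        return coursesByStudent.getOrDefault(student, Set.of());
    }

    public Set<Student> studentsIn(Course course) {
        return studentsByCourse.getOrDefault(course, Set.of());
    }
}

Neither Student nor Course can get its relation out of sync, because neither of them owns it.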
What I think you have found with your pretty nice example and explanation is that OO does not solve all problems well. As long as the responsibility is well shaped and sharp, everything is fine. And as long as each responsibility fits in exactly one bucket (the class), it is pretty easy to design. But here you have a tradeoff:
If I define for each responsibility a separate class, I will get a bloated design that is pretty difficult to understand (and sometimes to maintain).
If I include for each separate responsibility at least one interface, I will get more classes and interfaces than I need.
If I decide that one of the two classes is responsible for the relation as well, this one object has more knowledge than usual about the other.
And if you introduce a mediator or something similar in each case, your design will be more complex than the problem.
So perhaps you should ask the questions:
What is the likelihood that the relation between the 2 objects will change?
What is the likelihood that the relation will exist between more than one type of object at each end?
Is that part of the system highly visible, so that a lot of other parts will interface with it (and therefore be dependent on it)?
Take the simplest solution that could possibly work and start with that. As long as the solution is kept simple, it is only your code (you don't design a library for others), there are chances that you can change the design later without hassle.
So in your concrete case,
the board field should have access to the whole board XOR
the figure on the field should have the responsibility of moving XOR
there should be an object type (ChessGame?) that is responsible for the overall knowledge about moving, blocking, attacking ...
I do think that all of them are valid, and it depends on your particular "business case" which one is the most valid; a small sketch of the last option follows.
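For the last option, a minimal Java sketch (all names and the placeholder rule are hypothetical) might look like this, with the move knowledge living in ChessGame rather than in Piece or Board:

class Position {
    final int file, rank;
    Position(int file, int rank) { this.file = file; this.rank = rank; }
}

class Piece { /* colour, type, ... */ }

class Board {
    private final Piece[][] squares = new Piece[8][8];
    Piece pieceAt(Position p) { return squares[p.file][p.rank]; }
    void move(Position from, Position to) {
        squares[to.file][to.rank] = squares[from.file][from.rank];
        squares[from.file][from.rank] = null;
    }
}

class ChessGame {
    private final Board board = new Board();

    boolean isLegalMove(Position from, Position to) {
        // movement rules, blocking, check detection, ... would live here
        return board.pieceAt(from) != null; // placeholder rule only
    }

    void play(Position from, Position to) {
        if (isLegalMove(from, to)) {
            board.move(from, to);
        }
    }
}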

Single Responsibility Principle vs Anemic Domain Model anti-pattern

I'm in a project that takes the Single Responsibility Principle pretty seriously. We have a lot of small classes and things are quite simple. However, we have an anemic domain model: there is no behaviour in any of our model classes; they are just property bags. This isn't a complaint about our design -- it actually seems to work quite well.
During design reviews, SRP is brought out whenever new behaviour is added to the system, and so new behaviour typically ends up in a new class. This keeps things very easily unit testable, but I am perplexed sometimes because it feels like pulling behaviour out of the place where it's relevant.
I'm trying to improve my understanding of how to apply SRP properly. It seems to me that SRP is in opposition to adding business modelling behaviour that shares the same context to one object, because the object inevitably ends up either doing more than one related thing, or doing one thing but knowing multiple business rules that change the shape of its outputs.
If that is so, then it feels like the end result is an Anemic Domain Model, which is certainly the case in our project. Yet the Anemic Domain Model is an anti-pattern.
Can these two ideas coexist?
EDIT: A couple of context related links:
SRP - http://www.objectmentor.com/resources/articles/srp.pdf
Anemic Domain Model - http://martinfowler.com/bliki/AnemicDomainModel.html
I'm not the kind of developer who just likes to find a prophet and follow what they say as gospel. So I don't provide links to these as a way of stating "these are the rules", just as a source of definition of the two concepts.
Rich Domain Model (RDM) and the Single Responsibility Principle (SRP) are not necessarily at odds. RDM is more at odds with a very specialised subclass of SRP -- the model advocating "data beans + all business logic in controller classes" (DBABLICC).
If you read Martin's SRP chapter, you'll see that his modem example is entirely in the domain layer, but it abstracts the DataChannel and Connection concepts as separate classes. He keeps the Modem itself as a wrapper, since that is a useful abstraction for client code. It's much more about proper (re)factoring than mere layering. Cohesion and coupling are still the base principles of design.
Finally, three issues:
As Martin notes himself, it's not always easy to see the different 'reasons for change'. The very concepts of YAGNI, Agile, etc. discourage the anticipation of future reasons for change, so we shouldn't invent them where they aren't immediately obvious. I see 'premature, anticipated reasons for change' as a real risk in applying SRP, and it should be managed by the developer.
Further to the previous point, even correct (but unnecessarily anal) application of SRP may result in unwanted complexity. Always think about the next poor sod who has to maintain your class: will the diligent abstraction of trivial behaviour into its own interfaces, base classes and one-line implementations really aid his understanding of what should simply have been a single class?
Software design is often about finding the best compromise between competing forces. For example, a layered architecture is mostly a good application of SRP, but what about the fact that, say, changing a property of a business class from a boolean to an enum has a ripple effect across all the layers -- from the db through the domain, facades and web service to the GUI? Does this point to bad design? Not necessarily: it points to the fact that your design favours one axis of change over another.
I'd have to say "yes", but you have to do your SRP properly. If the same operation applies to only one class, it belongs in that class, wouldn't you say? How about if the same operation applies to multiple classes? In that case, if you want to follow the OO model of combining data and behavior, you'd put the operation into a base class, no?
I suspect that from your description, you're ending up with classes which are basically bags of operations, so you've essentially recreated the C-style of coding: structs and modules.
From the linked SRP paper:
"The SRP is one of the simplest of the principles, and one of the hardest to get right."
The quote from the SRP paper is very correct; SRP is hard to get right. This one and OCP are the two elements of SOLID that simply must be relaxed to at least some degree in order to actually get a project done. Overzealous application of either will very quickly produce ravioli code.
SRP can indeed be taken to ridiculous lengths, if the "reasons for change" are too specific. Even a POCO/POJO "data bag" can be thought of as violating SRP, if you consider the type of a field changing as a "change". You'd think common sense would tell you that a field's type changing is a necessary allowance for "change", but I've seen domain layers with wrappers for built-in value types; a hell that makes ADM look like Utopia.
It's often good to ground yourself with some realistic goal, based on readability or a desired cohesion level. When you say, "I want this class to do one thing", it should have no more or less than what is necessary to do it. You can maintain at least procedural cohesion with this basic philosophy. "I want this class to maintain all the data for an invoice" will generally allow SOME business logic, even summing subtotals or calculating sales tax, based on the object's responsibility to know how to give you an accurate, internally-consistent value for any field it contains.
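To ground that with a hypothetical Java sketch (the names and the flat tax-rate assumption are mine, not anything prescriptive): the invoice owns its line items, so it also owns the simple arithmetic that keeps its totals internally consistent.

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

class LineItem {
    private final BigDecimal unitPrice;
    private final int quantity;

    LineItem(BigDecimal unitPrice, int quantity) {
        this.unitPrice = unitPrice;
        this.quantity = quantity;
    }

    BigDecimal total() { return unitPrice.multiply(BigDecimal.valueOf(quantity)); }
}

class Invoice {
    private final List<LineItem> items = new ArrayList<>();
    private final BigDecimal taxRate; // e.g. new BigDecimal("0.08")

    Invoice(BigDecimal taxRate) { this.taxRate = taxRate; }

    void addItem(LineItem item) { items.add(item); }

    // The "data expert" also answers simple questions about its own data
    BigDecimal subtotal() {
        return items.stream().map(LineItem::total)
                    .reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    BigDecimal salesTax() { return subtotal().multiply(taxRate); }

    BigDecimal grandTotal() { return subtotal().add(salesTax()); }
}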
I personally do not have a big problem with a "lightweight" domain. Just having the one role of being the "data expert" makes the domain object the keeper of every field/property pertinent to the class, as well as all calculated field logic, any explicit/implicit data type conversions, and possibly the simpler validation rules (i.e. required fields, value limits, things that would break the instance internally if allowed). If a calculation algorithm, perhaps for a weighted or rolling average, is likely to change, encapsulate the algorithm and refer to it in the calculated field (that's just good OCP/PV).
I don't consider such a domain object to be "anemic". My perception of that term is a "data bag", a collection of fields that has no concept whatsoever of the outside world or even the relation between its fields other than that it contains them. I've seen that too, and it's not fun tracking down inconsistencies in object state that the object never knew was a problem. Overzealous SRP will lead to this by stating that a data object is not responsible for any business logic, but common sense would generally intervene first and say that the object, as the data expert, must be responsible for maintaining a consistent internal state.
Again, personal opinion, I prefer the Repository pattern to Active Record. One object, with one responsibility, and very little if anything else in the system above that layer has to know anything about how it works. Active Record requires the domain layer to know at least some specific details about the persistence method or framework (whether that be the names of stored procedures used to read/write each class, framework-specific object references, or attributes decorating the fields with ORM information), and thus injects a second reason to change into every domain class by default.
My $0.02.
I've found that following the SOLID principles did in fact lead me away from DDD's rich domain model; in the end, I found I didn't care. More to the point, I found that the logical concept of a domain model and a class in whatever language weren't mapped 1:1, unless we were talking about a facade of some sort.
I wouldn't say this is exactly a C style of programming where you have structs and modules; rather, you'll probably end up with something more functional. I realise the styles are similar, but the details make a big difference. I found my class instances end up behaving like higher-order functions, partial function application, lazily evaluated functions, or some combination of the above. It's somewhat ineffable for me, but that's the feeling I get from writing code following TDD + SOLID: it ends up behaving like a hybrid OO/functional style.
As for inheritance being a bad word, I think that's more due to the fact that inheritance isn't sufficiently fine-grained in languages like Java/C#. In other languages, it's less of an issue, and more useful.
I like the definition of SRP as:
"A class has only one business reason to change"
So, as long as behaviours can be grouped into single "business reasons" then there is no reason for them not to co-exist in the same class. Of course, what defines a "business reason" is open to debate (and should be debated by all stakeholders).
Before I get into my rant, here's my opinion in a nutshell: somewhere everything has got to come together... and then a river runs through it.
I am haunted by coding.
=======
Anemic data model and me... well, we pal around a lot. Maybe it's just the nature of small to medium sized applications with very little business logic built into them. Maybe I am just a bit 'tarded.
However, here's my 2 cents:
Couldn't you just factor out the code in the entities and tie it up to an interface?
public class Object1
{
    public string Property1 { get; set; }
    public string Property2 { get; set; }

    private IAction1 action1;

    public Object1(IAction1 action1)
    {
        this.action1 = action1;
    }

    public void DoAction1()
    {
        action1.Do(Property1);
    }
}

public interface IAction1
{
    void Do(string input1);
}
Does this somehow violate the principles of SRP?
Furthermore, isn't having a bunch of classes sitting around not tied to each other by anything but the consuming code actually a larger violation of SRP, but pushed up a layer?
Imagine the guy writing the client code sitting there trying to figure out how to do something related to Object1. If he has to work with your model he will be working with Object1, the data bag, and a bunch of 'services' each with a single responsibility. It'll be his job to make sure all those things interact properly. So now his code becomes a transaction script, and that script will itself contain every responsibility necessary to properly complete that particular transaction (or unit of work).
Furthermore, you could say, "no brah, all he needs to do is access the service layer. It's like Object1Service.DoActionX(Object1). Piece of cake." Well then, where's the logic now? All in that one method? You're still just pushing code around, and no matter what, you'll end up with the data and the logic being separated.
So in this scenario, why not expose that particular Object1Service to the client code and have its DoActionX() basically just be another hook into your domain model? By this I mean:
public class Object1Service
{
    private Object1Repository repository;

    public Object1Service(Object1Repository repository)
    {
        this.repository = repository;
    }

    // Tie in your Unit of Work aspect'ing stuff or whatever if need be
    public void DoAction1(Object1DTO object1DTO)
    {
        Object1 object1 = repository.GetById(object1DTO.Id);
        object1.DoAction1();
        repository.Save(object1);
    }
}
You have still factored the actual code for Action1 out of Object1, but for all intents and purposes you have a non-anemic Object1.
Say you need Action1 to represent 2 (or more) different operations that you would like to make atomic and separate into their own classes. Just create an interface for each atomic operation and hook it up inside DoAction1.
That's how I might approach this situation. But then again, I don't really know what SRP is all about.
Convert your plain domain objects to the ActiveRecord pattern with a common base class for all domain objects. Put common behaviour in the base class and override that behaviour in derived classes wherever necessary, or define new behaviour wherever required.
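A rough Java sketch of that suggestion (names invented, persistence details omitted), with the shared behaviour in the base class and a derived class overriding it where needed:

abstract class ActiveRecordBase {
    private Long id;
    public Long getId() { return id; }

    // Common behaviour shared by all domain objects
    public void save() {
        // insert when id == null, otherwise update (persistence code omitted)
    }

    public void delete() {
        // remove the row backing this object (persistence code omitted)
    }
}

class Customer extends ActiveRecordBase {
    private String name;

    @Override
    public void save() {
        validate();   // behaviour specific to Customer
        super.save(); // then reuse the common persistence logic
    }

    private void validate() {
        if (name == null || name.isEmpty()) {
            throw new IllegalStateException("name is required");
        }
    }
}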

Is there a best way to handle naming fads?

In the last year and a bit of working on my team's code base I have noticed a steady progression of naming conventions.
For example, there are a lot of classes that are named to express that they are a class that helps you do something.
Here's the ones I've spotted:
MyClassUtil
MyClassFactory
MyClassHelper
MyClassManager
MyClassService
It just seems to me that over time people come up with naming conventions for relatively the same thing and so instead of having everything named in a consistent manner you wind up with a code base that has a bit of every convention. All the new stuff is named based on the latest fad naming convention and so you can pretty much tell the age of a bit of code by what convention was in fashion at the time.
What is the best way to deal with this tendency? Is it really a problem? As these naming fads come into vogue, should one use the latest fad? Should one rename all existing items with the new naming convention? Or should one just accept the variety as something that is inescapable?
They don't seem like fads... all these names hint at the purpose of the class, and those purposes are different. With programming, it's all in the name, and they should be chosen very carefully. The variety doesn't need to be escaped. The names vary because the purposes of the classes vary.
MyClassUtil
-Some utilities for working with MyClass that it didn't come with. Maybe MyClass belongs to a library you're using, but you often use some higher level functions with it and you need somewhere to put them.
MyClassFactory
-Creates instances of MyClass in an abstracted way. This allows you to write code that needs MyClass instances; it can get those new instances from a MyClassFactory. This would allow the Factory to be modified in the future to serve up different specific implementations of MyClass. Maybe under unit testing, the Factory just serves up dummy/mock MyClasses. This means a class that uses the factory can be tested without needing to change it; just change the factory, and voilà, you can isolate the class being tested (see the sketch after this list).
MyClassHelper
-Ok, I may agree, perhaps this one could be more specific. It does something to help with MyClass, but what? Maybe this is a bit similar to MyClassUtil. But probably MyClassUtil contains general functions that work with MyClass, whereas the helper is initialized with a specific instance of MyClass and can then do operations on that one instance. You need a new helper for each MyClass you want to help.
MyClassManager
-Maybe this deals with a pool of MyClass instances and stores or orchestrates them. E.g. in a CommunicationsManager, the class would wire together the classes that talk to a port or connection (like Ethernet or serial), a class that deals with the comms protocol being sent over it so it can transport packets, and a class that deals with the messages in those packets.
MyClassService
-A service can do things for you, like convert a postcode into a grid reference. Usually a service can resolve to many specific things. With the postcode example, this class might have implementations that can talk to different web sites to do the conversion.
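To illustrate the Factory point from above with a small hypothetical Java sketch: code that needs MyClass instances depends only on the factory, so a test can hand it a factory that serves up dummies or mocks instead.

class MyClass { /* ... */ }

interface MyClassFactory {
    MyClass create();
}

class DefaultMyClassFactory implements MyClassFactory {
    @Override
    public MyClass create() { return new MyClass(); }
}

class Consumer {
    private final MyClassFactory factory;

    Consumer(MyClassFactory factory) { this.factory = factory; }

    void doWork() {
        MyClass instance = factory.create(); // no direct "new MyClass()" here
        // ... use the instance ...
    }
}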
All of the names of classes you've given above indicate to me a striking departure from object-oriented principles. There's no way of telling what "MyClassUtil" or "MyClassService" does. It could be anything. Class naming should be specific, and should relay clearly the actual function of the class. None of these do. The best way to deal with this tendency is to brush up on object oriented programming skills and name the classes accordingly.
Now, it could be that these examples point out the function, within the application architecture, that these classes represent, and your use of "MyClass" is simply a placeholder for something more definitive at runtime, in which case, I wouldn't view these as naming fads, but rather as descriptive indicators of the function of the class itself, with a loose hint of the application's underlying architecture.
If this is pervasive, the team needs to spend some time studying OO design: reading the source code to well-respected OO frameworks, books on design patterns or books such as Evans "Domain Driven Design".
"Util" and "Manager" are often symptoms of poor design - "code smells". So is "Helper" outside of special contexts (Rails apps) where it's well entrenched.
"Factory" and "Service" have precise technical meanings, you can check the code to see if it conforms to those design patterns.
The general remedy is to sit down with the team, and have an explicit discussion about what benefits you're expecting from these naming schemes, what makes sense and what doesn't, and then over the next few months apply refactoring techniques to phase out the names you've all decided are code smells.
Naming is important. It shouldn't be taken lightly, nor is it a subjective matter. True, there is often more than one correct answer to a given naming issue. However, there are seldom many answers consistent with previous choices, which is key.
Renaming the names to better ones and refactoring the code so that each class has a clear responsibility, is recommended. To know what kind of names to use, read Tim Ottinger's article about Meaningful Names.
When a class does only one thing, then giving it a descriptive name is usually easy. Words such as "manager" are vague and may indicate that the class is responsible for doing so many unrelated things, that no simple name is able to describe what the class does. If you can know what the class does just by looking at the name of the class, then the class has a good name.
I don't really see how Factory or Service fit in to a particular fad...
Factory is a design pattern and if the class really is a factory then it's a perfectly appropriate name.
If a class is a Windows service what's wrong with calling it service?
There isn't a problem unless you find that performing all the rename refactors is too costly even though you really want to do them.
Why not use a static analysis tool to help enforce a set of style and consistency rules?
If you're in the .NET world Microsoft provides a tool called StyleCop
In the classname examples you give does "MyClass" stand for an actual class name, so that you are really seeing names like "PersonnelRecordUtil" or "GraphNodeFactory"? MyClassFactory is a really bad actual name for a class.