History of access control modifiers such as public/private/protected - oop

How did these keywords and concepts come to life? What were the forces and problems that made them appear? What was the first language to have them?
Actually, it's not just about public/private/protected, but rather the whole range of keywords that enforce some rules (abstract, final, internal).
But, please, do not assume things. Answer if you know at least part of the answer or answer if you lived those moments. References are greatly appreciated.

Simula (1967), considered to be the first OO language, has modifiers called protected and hidden. I assume that public is the default; I can't remember. It also uses virtual.
And, with thanks to Pavel, Simula introduced the most important keywords (and concepts) of class, this, new, downcasting and reference types.
Smalltalk (1980), a later but much more fundamental OO language, gave us Methods responding to Messages. This is basically the same functionality as virtual functions. Messages and Classes were later imitated in C (non-OO) to give the Windows API polymorphic behavior, though it still needed ugly switch statements and function pointers to stand in for inheritance.
The first use of Properties was, as far as I know, in Delphi (Object Pascal, < 1994).

public, private and protected access modifiers come from C++. It seems that public and private already existed in "C with Classes", the short-lived precursor of C++. This is probably detailed in The Design and Evolution of C++.
I think abstract and final come from Java, and internal from C#.

This sort of thing starts out with multiple language designers asking "what's a simple, logical name for this concept?" Then, over time, certain names become popular (sometimes because they're good names, sometimes just because). Add 20 years, and most people end up picking the same names, based on what they've seen.
A similar question, perhaps, to asking how new words get added to (say) the English language.

For C++, the origins of private and public protection predate Stroustrup's experiments with C With Classes; they come from an even older system, the Cambridge CAP computer. This is described in section 2.10 of "The Design & Evolution of C++".
As for protected, that has had a murkier past, and I don't have a good reference for it.

Objective-C class naming convention vs Uncle Bob

In Chapter 2: Meaningful Names Uncle Bob writes:
Don't Add Gratuitous Context
In an imaginary application called "Gas Station Deluxe," it is a bad idea to prefix every class with GSD. Frankly, you are working against your tools. You type G and then press the completion key and are rewarded with a mile-long list of every class in your system.
Actually, that's what I discovered during my first days with Objective-C, a bit more than a year ago. After Java it was quite disappointing, but I thought I was the only one annoyed by that :)
I understand that the "Clean Code" book refers to Java most of the time, and Java has namespaces (packages), unlike Objective-C.
Do you use a 2-3 letter prefix in your classes if you're building an app, not a library?
What do you think: is it bad language design, a language "feature", or was Uncle Bob not right here?
Perhaps the key word here is gratuitous. In Objective-C, prefixes serve the important purpose of reducing the chance of name collisions. In other languages like Java and C++, the existence of support for namespaces makes the use of prefixes gratuitous (and a violation of the oft-cited DRY principle). In Objective-C, however, prefixes are meaningful, useful, and not gratuitous.
I was tempted to close this question, but I don't think I've seen a similar one asked before and it's a valid question. Here are my rather disorganized thoughts on the matter.
Many languages have a feature called namespaces, where the "fully qualified" class name is prefixed by a hierarchical series of names. For example, the String class in Java is, properly, java.lang.String, and a custom class is properly com.whatever.foobar.MyClass.
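To make that concrete, here is a minimal Java sketch showing how fully qualified names let two same-named classes coexist without any two-letter prefix. The second Color library (com.chartlib) is invented purely for illustration:
public class Palette {
    // Fully qualified names keep two "Color" classes apart without prefixes:
    java.awt.Color background = java.awt.Color.WHITE;          // real JDK class
    // com.chartlib.Color accent = new com.chartlib.Color();   // hypothetical second library
}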
Unfortunately, namespaces have never been added to Objective-C, which means that Objective-C symbols (class names, protocol names, and a few various other types) cannot be placed in a namespace even when using Objective-C++ (which has a namespace feature for functions, constants, structures, etc.)
The only solution to prevent symbol collisions in shared code, then, is to use some form of name mangling to make your symbol names unique. In Objective-C, the convention is to use a prefix of two characters (sometimes the number varies) for all your classes.
This Uncle Bob fellow is a twit for telling you not to do this, because while your program may still compile without prefixes, you'll lose the benefit of namespaces that prefixes still offer. Does your app use plugins? You need to prefix. Does your app have a public API? You need to prefix.
In theory, code within a single application that never touches the outside world can do without prefixes, but screw it--keep coding cleanly, and add a prefix even there. It'll save you grief later.
Personally I almost never use prefixes. The only exceptions are classes that are somehow connected to each other or that should all be kept together.
An example:
Some client app for chat. Let's call that chat an ExampleChat.
Then I'd use ECMessage, ECUser, ECRoom, etc. to easily see which classes there should be.
Or if I make some custom cells for UITableView I'd use prefixes to keep them all close to each other and not struggle with searching them in a "mile-long list". Example:
ECTextMessageCell, ECSoundMessageCell, ECUploadMessageCell, ECJoinOrLeaveMessageCell, etc.
That's my personal opinion, which may not be the best, but it's still the easiest for me.
Hope it helps
Well, if you do not have namespaces, name conflicts are likely to occur. You can see that a lot of C libraries use some kind of prefix. So I guess there are good reasons to have those prefixes and other reasons not to use them. But how big a problem would it really be to modify code completion to either ignore the prefix or accept three typed letters instead of just one?
So in the end it seems to me a matter of taste. I guess it would be more important to have well-structured classes with prefixes than a mess of classes without prefixes.
It has nothing to do with bad language design, IMHO. There was a time when software was not everywhere, so why waste extra effort on namespaces? And, as we can see, languages without namespaces are still in use nowadays.
I would say that the world is not black or white. I do programming in Java, with packages, and yes, it is annoying to have a prefix in each class, just as it is annoying and arguable to start interfaces with I (like .NET does).
Sometimes it does annoy me in Objective-C; however, it has some legitimacy if you do not have packages in your language, since you can 'build' artificial groups of classes like 'NS', 'UI', 'MK' and so on in Objective-C and Cocoa.
Beyond avoiding collisions, one of the benefits that name prefixes gives is that you're immediately aware of what type you're really dealing with. Suppose you had the following code:
Color c = ...;
MultiValueMap m = ...;
From a cursory glance at the code, and depending on what libraries you've used, those types could come from a number of different sources. You may have to look up which include/import statement was made to understand what the type can do (e.g. you want to modify it but it's missing a method that you're sure is there).
In the iOS world, you would immediately know whether it's a UIColor vs. a CGColor and gain immediate context.
In the past at WWDC, Apple would host a session where they explained Cocoa/Objective-C coding conventions. I believe they mention this aspect of name prefixes so you might want to find one of the recordings that are made available. Other C developers (e.g. Linux kernel developers) also do not seem to think highly of C++ namespaces (among other C++ features) for various reasons.

Is it bad form to have a MiscUtilities class? [closed]

Our company keeps a MiscUtilities class that consists solely of public static methods that do often unrelated tasks like converting dates from String to Calendar and writing ArrayLists to files. We refer to it in other classes and find it pretty convenient. However, I've seen that sort of Utilities class derided on TheDailyWTF. I'm just wondering if there's any actual downside to this sort of class, and what the alternatives are.
Rather than giving a personal opinion, I will quote from an authoritative source in the Java community, plus examples from two very reputable third-party libraries.
A quote from Effective Java 2nd Edition, Item 4: Enforce noninstantiability with a private constructor:
Occasionally you'll want to write a class that is just a grouping of static methods and static fields. Such classes have acquired a bad reputation because some people abuse them to avoid thinking in terms of objects, but they do have valid uses. They can be used to group related methods on primitive values or arrays, in the manner of java.lang.Math or java.util.Arrays. They can also be used to group static methods, including factory methods, for objects that implement a particular interface, in the manner of java.util.Collections. Lastly, they can be used to group methods on a final class, instead of extending the class.
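As a minimal sketch of that pattern (the class and method here are hypothetical, just to show the private constructor and the grouping of related static methods):
public final class CalendarUtils {
    // Private constructor enforces noninstantiability (Effective Java, Item 4).
    private CalendarUtils() {
        throw new AssertionError("No instances");
    }

    // Related static methods grouped around one concept: date handling.
    public static java.util.Calendar parse(String yyyyMmDd) {
        String[] parts = yyyyMmDd.split("-");
        java.util.Calendar c = java.util.Calendar.getInstance();
        c.set(Integer.parseInt(parts[0]),
              Integer.parseInt(parts[1]) - 1,   // Calendar months are 0-based
              Integer.parseInt(parts[2]));
        return c;
    }
}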
Java libraries have many examples of such utility classes.
Apache Commons Lang follows the TypeUtils naming convention
ArrayUtils, StringUtils, ObjectUtils, BooleanUtils, etc
Guava follows the Types naming convention
Objects, Strings, Throwables, Collections2, Iterators, Iterables, Lists, Maps, etc.
The package summary actually has a specific section on classes of static utility methods
Another entire package consists of nothing but utility classes for working with Java primitives, Ints, Floats, Booleans, etc.
Short summary
Prefer good OOP design, always
static utility classes have valid uses to group related methods on:
Primitives (since they're not objects)
Interfaces (since they can't have anything concrete of their own)
final classes (since they're not extensible)
Prefer good organization, always
Group utility methods for SomeType to SomeTypeUtils or SomeTypes
Avoid a single big utility class that contains various unrelated tasks on different types/concepts
Convenient, most likely.
Possible to grow into a scary, hard to maintain swiss-army-rocket-chainsaw-and-floor-polisher, also most likely.
I'd recommend separating the various tasks into separate classes, with some logical grouping besides "won't fit anywhere else".
The risk here is that the class becomes a tangled mess nobody fully comprehends and no one dares to touch - or replace. If you feel that is an acceptable risk and/or avoidable under your circumstances, nothing really prevents you from using it.
I've never been a fan of the MiscUtilities class. My biggest issue is that I never know what is in it. Anything filed under miscellaneous is not discoverable. Instead I prefer to use a common dll that I can import into my projects that contains well named, separated classes for different purposes. The difference is subtle, but I find that it makes my life a little easier.
For languages that support functions, this sort of class is undeniably bad form.
For languages that don't, this sort of class isn't, and is probably superior to extending other classes with random utility methods. The static utility methods, because they are in some other class, can only use the public interface of the objects they handle, which decreases the likelihood of certain kinds of bug. And this approach also avoids polluting public interfaces with a random grab bag of whatever people happened to find useful at the time.
There's a certain amount of personal style involved of course. I'm not a big believer in classes that provide everything under the sun (even C++'s std::string is a tad over-featured for my taste) and tend to prefer to have helper functionality as separate functions. Makes maintenance of the class easier, forces the public interface to be useful and efficient, and with duck-typing style mechanisms the external functions can be used across a wide range of types without having to duplicate source text or share base classes and so on. (The oft-derided algorithms in the C++ Standard Library are a good demonstration of this, imperfect and verbose as they are.)
That said, I've worked with many who complain about strings that don't know how to interpret themselves as filenames, or split themselves into words, or what have you, and so on. (I pick on strings because they seem to be the prime target for utility functions...) I happen to think there are unseen maintenance and reliability costs associated with having large classes like that, quite apart from the ugliness of having a nominally simple class that's actually a vast illogical mishmash of unrelated concerns whose grubby fingers end up poking themselves into every last corner -- because your self-tokenizing string needs some kind of container to put the tokens in, right?! -- but it's a balancing act, even if my wording suggests it's more clean-cut than that.
I'm not a big believer in the notion of "OO dogma", but perhaps the paranoid might see it at work here. There's no good reason that all functionality should be attached to a particular class, and many good reasons why it should not. But some languages still don't allow the creation of functions, which does nothing to remove the need for them and forces people to work around the restriction by creating classes that consist of nothing but static methods. This rather overloads the meaning of the class concept, to my mind, and not in any good way.
So that IS a good reason to rail against this practice, but it's pretty futile unless the language changes to accommodate what people need to do. And languages don't come without functions unless their designers have an axe to grind, or there are technical reasons for it, so I should think that change in either case is unlikely.
I suppose the executive summary is: no.
Well, bad utility classes are derided on TheDailyWTF :)
There's really nothing wrong with having a generic utilities class for miscellaneous static business functions. I mean, you could try to put it all into a more object oriented approach, but at what cost in time and effort to the business and for what trade-off of maintainability? If the latter outweighs the former, go for it.
One approach you may be able to take, depending on the language, etc., is to perhaps move some of the logic into extensions on existing objects. For example, extending the String class (thinking in C# here) with a method that tries to parse the string into a DateTime. An in-house library of extensions just enhances the language with your business' own little DSL(s).
The company I work for has a class like that in its repository. Personally I find it annoying because you have to be really intimate with the class in order to know what it's useful for. Consequently, I've found myself re-writing methods that this class already covers! Double annoying because I've now wasted my time.
I would prefer a more object oriented approach that would lead to expandability. Have a Utilities class for sure, but inside it put other classes that expand toward specific functionality. Like Utilities.XML, Utilities.DataFunctions, Utilities.WhateverYouWant. That way you can expand and eventually take your 20 function MiscUtilities class and turn it into a 500 function class library.
A Class Library like this could then be used by anyone, and added to by anyone (with privileges) in a logically organized way.
I think the real defect of such a class is that it breaks the separation of concerns principle. I usually create multiple "Helpers" classes to contain widely used, public static methods, for example ArrayHelpers for writing ArrayLists to files, and DatesHelper for converting dates from String to Calendar.
Moreover, if the class does contain complicated methods, it's better to try to refactor them using more object-oriented techniques.
You can always switch from your "Helpers" class to the use of various OO patterns, leaving your old static methods to function as a Facade.
You'll find great benefits every time you're able to do so.
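For instance, a rough sketch of that Facade idea (all names here are invented): the old static helper keeps its signature but simply delegates to the newer OO implementation, so existing callers don't break.
public final class DatesHelper {
    private DatesHelper() {}

    // Old static entry point, now just a thin facade.
    public static java.util.Calendar toCalendar(String text) {
        return new DateParser().parse(text);     // delegate to the OO implementation
    }
}

class DateParser {
    public java.util.Calendar parse(String text) {
        java.util.Calendar c = java.util.Calendar.getInstance();
        c.setTime(java.sql.Date.valueOf(text));  // expects "yyyy-mm-dd"
        return c;
    }
}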
I keep a separate misc class for each project, and copy/paste code from other projects as needed. Perhaps not the best approach, but I prefer to avoid cross-project dependencies.
Examples of things in my helper class:
hex2, hex4, and hex8 (accept integer parameters, except hex8 which has integer and uinteger variations; all versions ignore higher-order bits)
byt (convert 8 lsb's of argument into a byte)
getSI, getUI, getSL, getUL (each takes a byte array and an offset, and returns the little-endian signed word, unsigned word, signed 32-bit word, or unsigned 32-bit word at that offset)
putSI, putUI, putSL, putUL (takes a byte array, offset, and a value to put there in little-endian format)
hexArr (converts a byte array or portion thereof into a hex string)
hexToArr (converts a hex string to a byte array)
Zap(of T as iDisposable) (takes a byref iDisposable; if not Nothing, disposes it and sets it to Nothing)
Many of those are only useful when fiddling with binary data, but none of them is really domain-specific. Maybe the first six could go in a BinaryHelpers module, but I'm not sure where Zap should go other than in a misc utilities class.
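For what it's worth, here is a rough Java approximation of a couple of those helpers (the originals are in another language, so these names and signatures are just my guess at the intent):
public final class BinaryHelpers {
    private BinaryHelpers() {}

    // Unsigned 16-bit little-endian word at the given offset.
    public static int getUI(byte[] data, int offset) {
        return (data[offset] & 0xFF) | ((data[offset + 1] & 0xFF) << 8);
    }

    // Hex string for a portion of a byte array.
    public static String hexArr(byte[] data, int offset, int length) {
        StringBuilder sb = new StringBuilder();
        for (int i = offset; i < offset + length; i++) {
            sb.append(String.format("%02X", data[i] & 0xFF));
        }
        return sb.toString();
    }
}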
Utility classes aren't bad, in and of themselves. They can be (mis|ab|over)used at times, but they do have their place. If you have utility methods for types you own, consider moving the static methods to the appropriate types. Or creating extension methods.
Do try to avoid a monolithic utilities class - they may be static methods, but they will have poor cohesion. Break up a large set of unrelated functions into smaller groupings of related functionality, much like you would your "normal" classes. Name them *Helper or *Utils, or whatever your preference is. But be consistent, and group them together, perhaps in a folder within a project.
When utility classes are broken up as described, you can create methods for working with specific types - primitives or classes, such as arrays, strings, dates and times, and so on. Admittedly, these wouldn't belong anywhere else, so a utility class is the place to go.
Personally, I often find such a class handy - even if only in the short term. That said, I try not to share them between projects. I would not keep a global version, but write one specific to each project - otherwise you're incorporating dead-weight which may cause issues for security or architecture.
What I do for my personal projects is keep a misc library, but rather than adding a reference in my projects, I paste the relevant bits of code into the relevant places. It's technically duplicating it, but not within a single solution, and that's the important thing. However, I don't think this would work on a larger scale; too messy.
I generally don't have a problem with them, although, like all things, they can be abused:
They grow wildly large, so that most programs that use the class don't use 99% of the functions.
They grow wildly large, so that 90% of the functions aren't used by any program still in use.
Often they are a dumping ground for functions which are specific to one domain. These should be split off into a similar class used only by programs in that domain. Often, these functions would be better off incorporated into proper classes.
I used to have, in every project, a module called MiscStuffAndJunk. It was a place to hold everything that didn't have a clear place to go, either because the functionality was a one-off, or because I didn't want to change my focus, so as to do a proper design for a function that was needed by but extraneous away from what I was currently concentrating on.
Still, these modules are clearly in violation of OO design principles.
So nowadays, I name the module StuffIHaventRefactoredYet, and all is right with the world.
Depending on what your static utility functions actually do and return, they may cause problems for unit testing. I have come across a method in a class that calls a static function on a static class that returns things I do not want in my unit test, rendering the whole method untestable...

Explanation of naming conventions for classes, methods and variables

I'm currently in University and they're pretty particular about following their standards.
They've told me this:
All classes must start with a capital letter
Correct
public class MyClass {}
Incorrect
public class myClass {}
public class _myClass {}
All methods must start with a lowercase letter
Correct
public void doSomething() {}
Incorrect
public void DoSomething() {}
public void _doSomething() {}
All variables must start with a lowercase letter
Correct
string myString;
Incorrect
string MyString;
string _myString;
Yet in my last year of programming, I've been finding that people are using much different rules. It wouldn't matter if it were just a few people using the different rules, but almost everywhere I see these different practices being used.
So I just wanted to know what the reasoning behind the above standards is and why some of these other standards are being used: (are they wrong/old standards?)
Most methods I've seen start with a capital letter rather than a lowercase one -- pretty much any of Microsoft's methods I've been using from their imported namespaces. This is probably the most common one I've seen that I don't understand.
A lot of people use _ for class variables.
I've seen capitals on variables ie. string MyString;
I know I've missed a few as well, if you can think of any that you could add in and give an explanation for that would be helpful. I know everyone develops their own coding styles, but many of these practices have reasons behind them and I would rather stick with what makes the most sense.
Thanks,
Matt
There is no compelling reason to choose one coding style rather than another.
The most important thing is to agree on a coding style with the people you are working with. And to help you all agree on a coding style, your professor gave you one.
Most of the time, it is just a point of view. So just follow your professor's coding style if you have to code with the university.
Standards are arbitrary, like which side of the road to drive on; just do it like they tell you to do it ;-)
Most people are talking about naming convention style, but there are other things to consider when approaching naming conventions, such as what you actually name a routine.
Routine (method, function, and procedure) names should typically be in the form of a strong verb + object, regardless of how you format them. For example:
paginateResponse()
or
empty_input_buffer()
as (respectively) opposed to
dealWithResponse()
or
process_input_buffer()
Both "dealWith" and "process" are verbs, but they are ambiguous and cause any other programmers working with your code in the future to have to consult the actual routine definition to determine what it really does.
"Strong" verbs, on the other hand, as shown in the first two examples, are much more powerful in their descriptive power and really pin down what the routine is doing.
This makes your code easier to read as it is self-documenting and leads to higher levels of cohesion.
Also, as a personal point of style, I try to avoid at all costs using "my" in any name.
Standards are only standards if they are followed, and every company or institution has their own standards. It is one of the worst parts of programming. :D
Speaking specifically about the leading _. From my experience this is mostly used on variables that are declared private within a class. They are usually coupled with a method to retrieve them that has the same name without the leading _.
I am trying to follow the rules from Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries by Krzysztof Cwalina and Brad Abrams
Guidelines in this book are presented in four major forms: Do, Consider, Avoid, and Do not. These directives help focus attention on practices that should always be used, those that should generally be used, those that should rarely be used, and those that should never be used. Every guideline includes a discussion of its applicability, and most include a code example to help illuminate the dialogue.
Also, you can use FxCop to check your compliance with those rules.
Standards help with readability, and therefore improve maintainability. (because when you can read the code faster, easier and more accurately, you can debug and repair it, or enhance it, in less time and with less effort.)
They have no effect on reliability or availability, because the computer doesn't care what the variables are named or how the source code is formatted.
If your code is well-organized and readable, you have achieved the objective, regardless of whether or not it conforms exactly to anyone else's "standard".
This says nothing, of course, about how to handle the environment where "standards" are high on someone's list of developer evaluation tools, or management metrics...
I see logic behind capitalisation of classes and variables; it means you can do things like
Banana banana; // Makes a new Banana called banana
I've been learning Qt recently, and they follow your conventions to the letter. I wouldn't ever follow Microsoft's naming conventions!
The standards I've seen echo what's in the Framework Design Guidelines. In the examples you've stated above, I don't see you distinguishing between visibility (public/private).
For example:
Public facing methods should be PascalCase: public void MyMethod() ...
Parameters to methods should be camelCase: public void MyMethod(string myParameter) ...
Fields, which should always be private, should be camelCase. Some prefer the underscore prefix (I do) to distinguish them from method parameters.
The best bet on standards is to have your team agree upon conventions up front when the project kicks off, you'll find everything much more consistent.
Coding styles are based on personal preferences and to a large extent the features of the language that you're using.
My personal take is that it's more important to be consistent with a convention than to pick the "right one". People can be dogmatic about their preferred style, and things can often devolve into a religious war.
All classes must start with a capital letter - This goes hand-in-hand with variable naming and helps prevent confusion that would arise if you had both classes and variables named with the same rules. My preference is a capital letter because I'm used to it and it follows the guidelines for my preferred language (C#).
All methods must start with a lowercase letter - same goes, although I start my methods with an uppercase character (as per C# guidelines).
All variables must start with a lowercase letter - this, I believe, is dependent on your language's scoping features. Often people prefix variables (usually with an underscore or a character like "g") to indicate a variable's scope ("g" might mean "global"). This can help prevent confusion where variables have the same names in different scopes. My C#-driven preference: all variables start with a lowercase letter, and I use "this." to reference a member field of the same name where scope is a problem (this usually only occurs in a class's constructor), as in the sketch below.
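A tiny Java sketch of that constructor-shadowing case (the names are invented):
public class Account {
    private String owner;   // field starts with a lowercase letter

    public Account(String owner) {
        // The parameter shadows the field, so "this." disambiguates the two.
        this.owner = owner;
    }
}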
I can't let 3. go by without mentioning Hungarian notation (which is grossly misused and misunderstood). Joel has a great article that helped me understand these better.
In addition to the main point, that while any specific standard is essentially arbitrary but it's important to have some agreed upon standard, I'd also add that some standards are ubiquitous enough to have achieved the status of the "correct" way to do things.
For example, in Java, class names in professional code are always in CamelCase. I'll qualify the "always" by saying that your code will compile if you break the standard, and you may occasionally find some open source projects that break the convention as well, but I believe most people would take that as a sign that the author is not too familiar with the language. Most of your professor's guidelines are fairly standard (for Java, in any case). Being radically different in this case, apart from annoying your professor, will probably irritate total strangers ;)
It's interesting to me that some languages seem to have taken this standardization to heart, and enforce capitalization to have specific meaning (e.g. Haskell).
The rules you're citing are those used pretty universally in the Java world.
Are you doing Java code at university? If not, it may be that they were previously teaching Java, then switched to C# but kept the naming conventions.

Is there a best way to handle naming fads?

In the last year and a bit of working on my team's code base I have noticed a steady progression of naming conventions.
For example, there are a lot of classes that are named to express that they are a class that helps you do something.
Here's the ones I've spotted:
MyClassUtil
MyClassFactory
MyClassHelper
MyClassManager
MyClassService
It just seems to me that over time people come up with naming conventions for relatively the same thing and so instead of having everything named in a consistent manner you wind up with a code base that has a bit of every convention. All the new stuff is named based on the latest fad naming convention and so you can pretty much tell the age of a bit of code by what convention was in fashion at the time.
What is the best way to deal with this tendency? Is it really a problem? As these naming fads come into vogue, should one use the latest fad? Should one rename all existing items with the new naming convention? Or should one just accept the variety as something that is inescapable?
They don't seem like fads... all these names hint at the purpose of the class, and those purposes are different. With programming, it's all in the name, and they should be chosen very carefully. The variety doesn't need to be escaped. The names vary because the purposes of the classes vary.
MyClassUtil
-Some utilities for working with MyClass that it didn't come with. Maybe MyClass belongs to a library you're using, but you often use some higher level functions with it and you need somewhere to put them.
MyClassFactory
-Creates instances of MyClass in an abstracted way. This allows you to write code that needs MyClass instances. It can get those new instances from a MyClassFactory. This allows the Factory to be modified in the future to serve up different specific implementations of MyClass. Maybe under unit testing, the Factory just serves up dummy/mock MyClasses. This means a class that uses the factory can be tested without needing to change it; just change the factory, and voilà, you can isolate the class being tested (see the sketch after this list).
MyClassHelper
-OK, I may agree; perhaps this one could be more specific. It does something to help with MyClass, but what? Maybe this is a bit similar to MyClassUtil. But probably MyClassUtil is general functions that work with MyClass, whereas the helper is initialized with a specific instance of MyClass and then can do operations on that one instance. You need a new helper for each MyClass you want to help.
MyClassManager
-Maybe this deals with a pool of MyClass instances and stores or orchestrates them. Eg. in a CommunicationsManager, the class would handle wiring together classes that handle talking to a port or connection like ethernet or serial, and a class that deals with the comms protocol being sent over it so it can transport packets, and a class that deals with the messages in those packets.
MyClassService
-A service can do things for you, like convert a given postcode into a grid reference. Usually a service can resolve to many specific things. With the postcode example, this class might have implementations that can talk to different web sites to do the conversion.
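To make the Factory point concrete, here is a minimal Java sketch (all names invented) of how a factory lets a class under test receive dummy instances without being changed:
interface MyClassFactory {
    MyClass create();
}

class MyClass {
    void doWork() { /* real behaviour */ }
}

class DefaultMyClassFactory implements MyClassFactory {
    public MyClass create() { return new MyClass(); }
}

// The consumer only knows about the factory, so a test can hand it a factory
// that returns mock/dummy MyClass instances without touching this class.
class Consumer {
    private final MyClassFactory factory;
    Consumer(MyClassFactory factory) { this.factory = factory; }

    void run() {
        MyClass instance = factory.create();
        instance.doWork();
    }
}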
All of the names of classes you've given above indicate to me a striking departure from object-oriented principles. There's no way of telling what "MyClassUtil" or "MyClassService" does. It could be anything. Class naming should be specific, and should relay clearly the actual function of the class. None of these do. The best way to deal with this tendency is to brush up on object oriented programming skills and name the classes accordingly.
Now, it could be that these examples point out the function, within the application architecture, that these classes represent, and your use of "MyClass" is simply a placeholder for something more definitive at runtime, in which case, I wouldn't view these as naming fads, but rather as descriptive indicators of the function of the class itself, with a loose hint of the application's underlying architecture.
If this is pervasive, the team needs to spend some time studying OO design: reading the source code to well-respected OO frameworks, books on design patterns or books such as Evans "Domain Driven Design".
"Util" and "Manager" are often symptoms of poor design - "code smells". So is "Helper" outside of special contexts (Rails apps) where it's well entrenched.
"Factory" and "Service" have precise technical meanings, you can check the code to see if it conforms to those design patterns.
The general remedy is to sit down with the team, and have an explicit discussion about what benefits you're expecting from these naming schemes, what makes sense and what doesn't, and then over the next few months apply refactoring techniques to phase out the names you've all decided are code smells.
Naming is important. It shouldn't be taken lightly, nor is it a subjective matter. True, there is often more than one correct answer to a given naming issue. However, there are seldom many answers consistent with previous choices, which is key.
Renaming the names to better ones and refactoring the code so that each class has a clear responsibility, is recommended. To know what kind of names to use, read Tim Ottinger's article about Meaningful Names.
When a class does only one thing, then giving it a descriptive name is usually easy. Words such as "manager" are vague and may indicate that the class is responsible for doing so many unrelated things, that no simple name is able to describe what the class does. If you can know what the class does just by looking at the name of the class, then the class has a good name.
I don't really see how Factory or Service fit in to a particular fad...
Factory is a design pattern and if the class really is a factory then it's a perfectly appropriate name.
If a class is a Windows service what's wrong with calling it service?
There isn't a problem unless you find that performing all the rename refactors is too costly even though you really want to do them.
Why not use a static analysis tool to help enforce a set of style and consistency rules?
If you're in the .NET world Microsoft provides a tool called StyleCop
In the classname examples you give does "MyClass" stand for an actual class name, so that you are really seeing names like "PersonnelRecordUtil" or "GraphNodeFactory"? MyClassFactory is a really bad actual name for a class.

Why the claim that C# people don't get object-oriented programming? (vs class-oriented)

This caught my attention last night.
On the latest ALT.NET Podcast, Scott Bellware discusses how, as opposed to Ruby, languages like C#, Java et al. are not truly object-oriented, opting instead for the phrase "class-oriented". They talk about this distinction in very vague terms without going into much detail or discussing the pros and cons much.
What is the real difference here, and how much does it matter? What other languages, then, are "object-oriented"? It sounded pretty interesting, but I don't want to have to learn Ruby just to know what, if anything, I am missing.
Update
After reading some of the answers below it seems like people generally agree that the reference is to duck-typing. What I'm not sure I understand still though is the claim that this ultimately changes all that much. Especially if you are already doing proper TDD with loose coupling etc. Can someone show me an example of a specific thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?
In an object-oriented language, objects are defined by defining objects rather than classes, although classes can provide some useful templates for specific, cookie-cutter definitions of a given abstraction. In a class-oriented language, like C# for example, objects must be defined by classes, and these templates are usually canned and packaged and made immutable before runtime. This arbitrary constraint that objects must be defined before runtime and that the definitions of objects are immutable is not an object-oriented concept; it's class oriented.
The duck typing comments here are more attributable to the fact that Ruby and Python are more dynamic than C#. It doesn't really have anything to do with their OO nature.
What (I think) Bellware meant by that is that in Ruby, everything is an object. Even a class. A class definition is an instance of an object. As such, you can add/change/remove behavior from it at runtime.
Another good example is that nil (Ruby's null) is an object as well. In Ruby, everything is LITERALLY an object. Having such deep OO in its entire being allows for some fun metaprogramming techniques such as method_missing.
IMO, it's really a matter of over-defining "object-oriented", but what they are referring to is that Ruby, unlike C#, C++, Java, et al., does not require defining a class -- you really only ever work directly with objects. Conversely, in C# for example, you define classes that you then must instantiate into objects by way of the new keyword. The key point being that you must declare or describe a class in C#. Additionally, in Ruby, everything -- even numbers, for example -- is an object. In contrast, C# still retains the concept of an object type and a value type. This, in fact, I think illustrates the point they make about C# and other similar languages -- object type and value type imply a type system, meaning you have an entire system of describing types as opposed to just working with objects.
Conceptually, I think OO design is what provides the abstraction we use to deal with complexity in software systems these days. The language is a tool used to implement an OO design -- some make it more natural than others. I would still argue that, by the more common and broader definition, C# and the others are still object-oriented languages.
There are three pillars of OOP
Encapsulation
Inheritance
Polymorphism
If a language can do those three things, it is an OOP language.
I am pretty sure the argument of language X does OOP better than language A will go on forever.
OO is sometimes defined as message-oriented. The idea is that a method call (or property access) is really a message sent to another object. How the receiving object handles the message is completely encapsulated. Often the message corresponds to a method which is then executed, but that is just an implementation detail. You can, for example, create a catch-all handler which is executed regardless of the method name in the message.
Static OO as in C# does not have this kind of encapsulation. A message has to correspond to an existing method or property, otherwise the compiler will complain. Dynamic languages like Smalltalk, Ruby or Python do, however, support "message-based" OO.
So in this sense C# and other statically typed OO languages are not true OO, since they lack "true" encapsulation.
Update: It's the new wave, which suggests everything we've been doing till now is passé. It seems to be propping up quite a bit in podcasts and books. Maybe this is what you heard.
Till now we've been concerned with static classes and haven't unleashed the power of object-oriented development. We've been doing 'class-based dev.' Classes are fixed/static templates used to create objects. All objects of a class are created equal.
Just to illustrate what I've been babbling about, let me borrow a Ruby code snippet from a PragProg screencast I just had the privilege of watching.
'Prototype-based development' blurs the line between objects and classes: there is no difference.
animal = Object.new # create a new instance of base Object
def animal.number_of_feet=(feet) # adding new methods to an Object instance. What?
  @number_of_feet = feet
end
def animal.number_of_feet
  @number_of_feet
end
cat = animal.clone # inherits 'number_of_feet' behavior from animal
cat.number_of_feet = 4
felix = cat.clone # inherits state of '4' and behavior from cat
puts felix.number_of_feet # outputs 4
The idea being it's a more powerful way to inherit state and behavior than traditional class-based inheritance. It gives you more flexibility and control in certain "special" scenarios (that I've yet to fathom). This allows things like mix-ins (reusing behavior without class inheritance).
By challenging the basic primitives of how we think about problems, 'true OOP' is like 'the Matrix' in a way... You keep going WTF in a loop. Like this one.. where the base class of Container can be either an Array or a Hash based on which side of 0.5 the random number generated is.
class Container < (rand < 0.5 ? Array : Hash)
end
Ruby, JavaScript and the new brigade seem to be the ones pioneering this. I'm still on the fence about this one... reading up and trying to make sense of this new phenomenon. It seems powerful.. too powerful.. Useful? I need my eyes opened a bit more. Interesting times, these.
I've only listened to the first 6-7 minutes of the podcast that sparked your question. If their intent is to say that C# isn't a purely object-oriented language, that's actually correct. Not everything in C# is an object (at least the primitives aren't, though boxing creates an object containing the same value). In Ruby, everything is an object. Daren and Ben seem to have covered all the bases in their discussion of "duck typing", so I won't repeat it.
Whether or not this difference (everything is an object versus everything is not an object) is material/significant is a question I can't readily answer because I don't have sufficient depth in Ruby to compare it to C#. Those of you on here who know Smalltalk (I don't, though I wish I did) have probably been looking at the Ruby movement with some amusement, since Smalltalk was the first pure OO language 30 years ago.
Maybe they are alluding to the difference between duck typing and class hierarchies?
if it walks like a duck and quacks like a duck, just pretend it's a duck and kick it.
In C#, Java etc. the compiler fusses a lot about: Are you allowed to do this operation on that object?
Object Oriented vs. Class Oriented could therefore mean: Does the language worry about objects or classes?
For instance: In Python, to implement an iterable object, you only need to supply a method __iter__() that returns an object that has a method named next(). That's all there is to it: No interface implementation (there is no such thing). No subclassing. Just talking like a duck / iterator.
EDIT: This post was upvoted while I rewrote everything. Sorry, won't ever do that again. The original content included advice to learn as many languages as possible and to nary worry about what the language doctors think / say about a language.
That was an abstract-podcast indeed!
But I see what they're getting at - they're just dazzled by Ruby sparkle. Ruby allows you to do things that C-based and Java programmers wouldn't even think of, and combinations of those things let you achieve undreamt-of possibilities.
Adding new methods to the built-in String class because you feel like it, passing around unnamed blocks of code for others to execute, mixins... Conventional folks are not used to objects straying that far from the class template.
It's a whole new world out there, for sure.
As for the C# guys not being OO enough... don't take it to heart. Just take it as the stuff you say when you're at a loss for words. Ruby does that to most people.
If I had to recommend one language for people to learn in the current decade, it would be Ruby. I'm glad I did, although some people may claim Python. But that's just, like, my opinion.. man! :D
I don't think this is specifically about duck typing. For instance, C# supports limited duck typing already - an example is that you can use foreach on any class whose GetEnumerator() returns an object with MoveNext() and Current, even if it doesn't implement IEnumerable.
The concept of duck typing is compatible with statically typed languages like Java and C#; it's basically an extension of reflection.
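As a rough illustration of that reflection angle, here is a minimal Java sketch (the method name speak is just an invented example) that calls a method on any object that happens to have it, regardless of its declared type:
import java.lang.reflect.Method;

public class DuckTyping {
    // Invoke speak() on any object that has a public speak() method.
    static Object speakIfPossible(Object duck) throws Exception {
        Method m = duck.getClass().getMethod("speak");
        return m.invoke(duck);
    }

    public static class Duck  { public String speak() { return "quack"; } }
    public static class Robot { public String speak() { return "beep";  } }

    public static void main(String[] args) throws Exception {
        System.out.println(speakIfPossible(new Duck()));   // quack
        System.out.println(speakIfPossible(new Robot()));  // beep
    }
}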
This is really the case of static vs dynamic typing. Both are proper-OO, in as much as there is such a thing. Outside of academia it's really not worth debating.
Rubbish code can be written in either. Great code can be written in either. There's absolutely nothing functional that one model can do that the other can't.
The real difference is in the nature of the coding done. Static types reduce freedom, but the advantage is that everyone knows what they're dealing with. The opportunity to change instances on the fly is very powerful, but the cost is that it becomes hard to know what you're dealing with.
For instance for Java or C# intellisense is easy - the IDE can quickly produce a drop list of possibilities. For Javascript or Ruby this becomes a lot harder.
For certain things, for instance producing an API that someone else will code with, there is a real advantage in static typing. For others, for instance rapidly producing prototypes, the advantage goes to dynamic.
It's worth having an understanding of both in your skills toolbox, but nowhere near as important as understanding the one you already use in real depth.
Object Oriented is a concept. This concept is based upon certain ideas. The technical names of these ideas (actually rather principles that evolved over the time and have not been there from the first hour) have already been given above, I'm not going to repeat them. I'm rather explaining this as simple and non-technical as I can.
The idea of OO programming is that there are objects. Objects are small, independent entities. These entities may have embedded information, or they may not. If they have such information, only the entity itself can access it or change it. The entities communicate with each other by sending messages to each other. Compare this to human beings. Human beings are independent entities, having internal data stored in their brains, and they interact with each other by communicating (e.g. talking to each other). If you need knowledge from someone else's brain, you cannot directly access it; you must ask them a question and they may answer you, telling you what you wanted to know.
And that's basically it. This is the real idea behind OO programming: write these entities, define the communication between them, and have them interact together to form an application. This concept is not bound to any language. It's just a concept, and whether you write your code in C#, Java, or Ruby is not important. With some extra work this concept can even be applied in pure C, even though it is a procedural language; it offers everything you need for the concept.
Different languages have adopted this concept of OO programming, and of course the concepts are not always equal. Some languages allow what other languages forbid, for example. Now, one of the concepts involved is the concept of classes. Some languages have classes, some don't. A class is a blueprint for what an object looks like. It defines the internal data storage of an object, it defines the messages an object can understand, and if there is inheritance (which is not mandatory for OO programming!), classes also define which other class (or classes, if multiple inheritance is allowed) this class inherits from (and which properties, if selective inheritance exists). Once you have created such a blueprint, you can generate an unlimited number of objects built according to this blueprint.
There are OO languages that have no classes, though. How are objects built then? Well, usually dynamically. E.g. you can create a new blank object and then dynamically add internal structure like instance variables or methods (messages) to it. Or you can duplicate an already existing object, with all its properties, and then modify it. Or possibly merge two objects into a new one. Unlike class-based languages, these languages are very dynamic, as you can generate objects during runtime in ways that not even you, the developer, thought about when you started writing the code.
Usually this dynamism has a price: the more dynamic a language is, the more memory (RAM) objects will waste and the slower everything gets, as program flow is extremely dynamic as well and it's hard for a compiler to generate efficient code if it has no chance to predict code or data flow. JIT compilers can optimize some parts of that during runtime, once they know the program flow; however, as these languages are so dynamic, program flow can change at any time, forcing the JIT to throw away all compilation results and re-compile the same code over and over again.
But this is a tiny implementation detail - it has nothing to do with the basic OO principle. It is nowhere said that objects need to be dynamic or must be alterable during runtime. The Wikipedia says it pretty well:
Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance.
http://en.wikipedia.org/wiki/Object-oriented_programming
They may or they may not. This is all not mandatory. Mandatory is only the presence of objects and that they must have ways to interact with each other (otherwise objects would be pretty useless if they cannot interact with each other).
You asked: "Can someone show me an example of a wonderous thing I could do with ruby that I cannot do with c# and that exemplifies this different oop approach?"
One good example is active record, the ORM built into rails. The model classes are dynamically built at runtime, based on the database schema.
This is really probably getting down to what these people see others doing in C# and Java, as opposed to what C# and Java support in terms of OOP. Most languages can be used in different programming paradigms. For example, you can write procedural code in C# and Scheme, and you can do functional-style programming in Java. It is more about what you are trying to do and what the language supports.
I'll take a stab at this.
Python and Ruby are duck-typed. To produce any maintainable code in these languages, you pretty much have to use test-driven development. As such, it is very important for a developer to be able to easily inject dependencies into their code without having to create a giant supporting framework.
Successful dependency injection depends upon having a pretty good object model. The two are sort of two sides of the same coin. If you really understand how to use OOP, then you should by default create designs where dependencies can be easily injected.
Because dependency injection is easier in dynamically typed languages, the Ruby/Python developers feel like their language understands the lessons of OO much better than its statically typed counterparts.