Public vs. Private? - oop

I don't really understand why it's generally good practice to make member variables and member functions private.
Is it for the sake of preventing people from screwing with things, or is it more of an organizational tool?

Basically, yes, it's to prevent people from screwing with things.
Encapsulation (information hiding) is the term you're looking for.
By only publishing the bare minimum of information to the outside world, you're free to change the internals as much as you want.
For example, let's say you implement your phone book as an array of entries and don't hide that fact.
Someone then comes along and writes code which searches or manipulates your array without going through your "normal" interface. That means that, when you want to start using a linked list or some other more efficient data structure, their code will break, because it relied on that information.
And that's your fault for publishing that information, not theirs for using it :-)
Classic examples are the setters and getters. You might think that you could just expose the temperature variable itself in a class so that a user could just do:
Location here = new Location();
int currTemp = here.temp;
But what if you later wanted it to actually web-scrape information from the Bureau of Meteorology whenever you asked for the temperature? If you'd encapsulated the information in the first place, the caller would just be doing:
int currTemp = here.getTemp();
and you could change the implementation of that method as much as you want. The only thing you have to preserve is the API (function name, arguments, return type and so on).
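As a minimal sketch of why the getter buys you that freedom (a hypothetical Location class, written here in C#-style code), the caller's line never changes; only the body of getTemp() does:

public class Location
{
    private int temp; // version 1: a cached field

    public int getTemp()
    {
        // Version 1 just returns the field. A later version could instead
        // fetch the value from a weather service here; callers never notice,
        // because the signature stays the same.
        return temp;
    }
}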
Interestingly, it's not just in code. Certain large companies will pepper their documentation with phrases like:
This technical information is for instructional purposes only and may change in future releases.
That allows them to deliver what the customer wants (the extra information) but doesn't lock them in to supporting it for all eternity.

The main reason is that you, the library developer, have the assurance that nobody will be using parts of your code that you don't want to have to maintain.
Every public piece of your code can, and inevitably will, be used by your customers. If you later discover that your design was actually terrible, and that version 2.0 should be written much better, you realise that your paying customers actually want you to preserve all existing functionality, and you're locked in to maintaining backwards compatibility at the price of making better software.
By making as much of your code as possible private, you are unreservedly declaring that this code is nobody's business and that you can and will be able to rewrite it at any time.

It's to prevent people from screwing with things - but not from a security perspective.
Instead, it's intended to allow users of your class to only care about the public sections, leaving you (the author) free to modify the implementation (private) without worrying about breaking someone else's code.
For instance, most programming languages seem to store Strings as a char[] (an array of characters). If for some reason it was discovered that a linked list of nodes (each containing a single character) performed better, the internal implementation using the array could be switched, without (theoretically) breaking any code using the String class.

It's to present a clear code contract to anyone (you, someone else) who is using your object... separate "how to use it" from "how it works". This is known as Encapsulation.
On a side note, at least on .NET (probably on other platforms as well), it's not very hard for someone who really wants access to get to private portions of an object (in .NET, using reflection).

Take the typical example of a counter: the thing the bodyguard at your night club holds in his hands to make his punch harder and to count the people entering and leaving the club.
Now the thing is defined like this:
public class Counter {
    private int count = 0;

    public void increment()
    {
        count++;
    }

    public void decrement()
    {
        count--;
    }
}
As you can see, there are no setters/getters for count, because we don't want users (programmers) of this class to be able to call myCounter.setCount(100), or even worse myCounter.count -= 10, because that's not what this thing does: it goes up by one for everyone entering and down by one for everyone leaving.

There is a scope for a lot of debate on this.
For example ... if a lot of the .NET Framework were private, this would prevent developers from screwing things up, but at the same time it would prevent them from using the functionality.
In my personal opinion, I would give preference to making methods public. But I would suggest making use of the Facade pattern. In simple terms, you have a class that encapsulates complex functionality. For example, in the .NET Framework, WebClient is a Facade that hides the complex HTTP request/response logic.
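As a rough illustration of the Facade idea (hypothetical class and method names; WebClient.DownloadString is the real .NET call being wrapped):

// One simple public method hides the lower-level request/response steps.
public class ReportDownloader
{
    public string DownloadReport(string url)
    {
        // Internally this opens the connection, sends the request and reads
        // the response; callers only ever see this single call.
        using (var client = new System.Net.WebClient())
        {
            return client.DownloadString(url);
        }
    }
}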
Also ... keep classes simple ... and you should have few public methods. That is a better abstraction than having large classes with lots of private methods.

It is useful to know how an object is 'put together'; have a look at this video on YouTube:
http://www.youtube.com/watch?v=RcZAkBVNYTA&list=PL3FEE93A664B3B2E7&index=11&feature=plpp_video


Polymorphism versus switch case tradeoffs

I haven't found any clear articles on this, but I was wondering why polymorphism is the recommended design pattern over exhaustive switch cases / pattern matching. I ask this because I've gotten a lot of heat from experienced developers for not using polymorphic classes, and it's been troubling me. I've personally had a terrible time with polymorphism and a wonderful time with switch cases; the reduction in abstraction and indirection makes the code so much easier to read, in my opinion. This is in direct contrast with books like "Clean Code", which are typically seen as industry standards.
Note: I use TypeScript, so the following examples may not apply in other languages, but I think the principle generally applies as long as you have exhaustive pattern matching / switch cases.
List the options
If you want to know what the possible values of an action are, this is trivial with an enum and a switch case. For classes this requires some reflection magic.
// definitely two actions here, I could even loop over them programmatically with basic primitives
enum Action {
  A = 'a',
  B = 'b',
}
Following the code
Dependency injection and abstract classes mean that jump to definition will never go where you want
function doLetterThing(myEnum: Action) {
  switch (myEnum) {
    case Action.A:
      return;
    case Action.B:
      return;
    default:
      exhaustiveCheck(myEnum);
  }
}
versus
function doLetterThing(action: BaseAction) {
  action.doAction();
}
If I jump to definition for BaseAction or doAction I will end up on the abstract class, which doesn't help me debug the function or the implementation. If you have a dependency injection pattern with only a single class, this means that you can "guess" by going to the main class / function and looking for how "BaseAction" is instantiated and following that type to the place and scrolling to find the implementation. This seems generally like a bad UX for a developer though.
(A small note on whether dependency injection is good: traits seem to do a good enough job in the cases where it is necessary, though either one done prematurely, as a rule rather than out of necessity, seems to lead to more difficult-to-follow code.)
Write less code
This depends, but if you have to define an extra abstract class for your base type, plus override all the function types, how is that less code than single-line switch cases? With good types here, if you add an option to the enum, your type checker will flag all the places you need to handle it, which will usually involve adding one line for the case and one or more lines for the implementation. Compare this with polymorphic classes, where you need to define a new class, which needs the new function syntax with the correct params and the opening and closing parens. In most cases, switch cases have less code and fewer lines.
Colocation
Everything for a type is in one place, which is nice, but generally whenever I implement a function like this I look for a similarly implemented function. With a switch case it's immediately adjacent; with a derived class I would need to find and open another file or directory.
If I implemented a feature change such as trimming spaces off the ends of a string for one type, I would need to open all the class files to make sure that if they implement something similar, it is implemented correctly in all of them. And if I forget, I might have different behaviour for different types without knowing. With a switch, the co-location makes this extremely obvious (though not foolproof).
Conclusion
Am I missing something? It doesn't make sense that we have these clear design principles, about which I can basically only find affirmative articles, yet I see no clear benefits, and serious downsides, compared to a basic pattern-matching style of development.
Consider the SOLID principles, in particular OCP (the open/closed principle) and DI.
To extend a switch case or enum and add new functionality in the future, you must modify the existing code. Modifying legacy code is risky and expensive. Risky because you may inadvertently introduce regression. Expensive because you have to learn (or re-learn) implementation details, and then re-test the legacy code (which presumably was working before you modified it).
Dependency on concrete implementations creates tight coupling and inhibits modularity. This makes code rigid and fragile, because a change in one place affects many dependents.
In addition, consider scalability. An abstraction supports any number of implementations, many of which are potentially unknown at the time the abstraction is created. A developer needn't understand or care about additional implementations. How many cases can a developer juggle in one switch, 10? 100?
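As a minimal sketch of that idea (hypothetical names, shown in C# even though the question uses TypeScript), a new behaviour is added as a new class; the caller and the existing implementations are never edited:

public interface IAction
{
    void DoAction();
}

public class ActionA : IAction
{
    public void DoAction() { /* behaviour for A */ }
}

public class ActionB : IAction
{
    public void DoAction() { /* behaviour for B */ }
}

public static class Runner
{
    // Adding an ActionC later is a new file; this method stays untouched.
    public static void DoLetterThing(IAction action) => action.DoAction();
}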
Note this does not mean polymorphism (or OOP) is suitable for every class or application. For example, there are counterpoints in, Should every class implement an interface? When considering extensibility and scalability, there is an assumption that a code base will grow over time. If you're working with a few thousand lines of code, "enterprise-level" standards are going to feel very heavy. Likewise, coupling a few classes together when you only have a few classes won't be very noticeable.
Benefits of good design are realized years down the road when code is able to evolve in new directions.
I think you are missing the point. The main purpose of clean code is not to make your life easier while implementing the current feature; rather, it makes your life easier in the future when you are extending or maintaining the code.
In your example, you may feel fine implementing your two actions using a switch case. But what happens if you need to add more actions in the future? Using the abstract class, you can easily create a new action type and the caller doesn't need to be modified. But if you keep using switch cases it will get a lot messier, especially for complex cases.
Also, following a better design pattern (DI in this case) will make the code easier to test. When you consider only easy cases, you may not see the usefulness of proper design patterns. But if you think about the broader picture, it really pays off.
"Base class" is against the Clean Code. There should not be a "Base class", not just for bad naming, also for composition over inheritance rule. So from now on, I will assume it is an interface in which other classes implement it, not extend (which is important for my example). First of all, I would like to see your concerns:
Answers to your concerns
This depends, but if have to define an extra abstract class for your
base type, plus override all the function types, how is that less code
than single line switch cases
I think "write less code" should not be character count. Then Ruby or GoLang or even Python beats the Java, obviously does not it? So I would not count the lines, parenthesis etc. instead code that you should test/maintain.
Everything for a type is in one place which is nice, but generally
whenever I implement a function like this is I look for a similarly
implemented function.
If "look for a similarly" means, having implementation together makes copy some parts from the similar function then we also have some clue here for refactoring. Having Implementation class differently has its own reason; their implementation is completely different. They may follow some pattern, lets see from Communication perspective; If we have Letter and Phone implementations, we should not need to look their implementation to implement one of them. So your assumption is wrong here, if you look to their code to implement new feature then your interface does not guide you for the new feature. Let's be more specific;
interface Communication {
    void sendMessage();
}

class Letter implements Communication {
    public void sendMessage() {
        // get receiver
        // get sender
        // set message
        // send message
    }
}
Now we need Phone, so if we go to the Letter implementation to get an idea of how to implement Phone, then our interface is not enough to guide our implementation. Technically, Phone and Letter send a message differently. So we need a design pattern here, maybe the Template Method pattern? Let's see:
interface Communication {
    default void sendMessage() {
        getMessageFactory().sendMessage(getSender(), getReceiver(), getBody());
    }

    MessageFactory getMessageFactory();
    String getSender();
    String getReceiver();
    String getBody();
}

class Letter implements Communication {
    private String sender, receiver, body;

    public String getSender() { return sender; }
    public String getReceiver() { return receiver; }
    public String getBody() { return body; }
    public MessageFactory getMessageFactory() { return new LetterMessageFactory(); }
}
Now when we need to implement Phone, we don't need to look at the details of the other implementations. We know exactly what we need to return, and our Communication interface's default method handles how to send the message.
If I implemented a feature change such as trimming spaces off the ends
of a string for one type, I would need to open all the class files to
make sure if they implement something similar that it is implemented
correctly in all of them...
So if there is a "feature change" it should be only its implemented class, not in all classes. You should not change all of the implementations. Or if it is same implementation in all of them, then why each implements it differently? It should be kept as the default method in their interface. Then if feature change required, only default method is changed and you should update your implementation and test in one place.
Those are the main points answering your concerns. But I think the bigger issue is that you don't yet see the benefit. I was also struggling before I worked on a big project whose features other teams needed to extend. I will divide the benefits into topics, with extreme examples that may make them easier to understand:
Easy to read
Normally when you see a function call, you should not feel you have to go to its implementation to understand what is happening there; it should be self-explanatory. Based on that: action.doAction(), or let's say communication.sendMessage() if the classes implement the Communication interface, needs no further reading. I don't need to go to the base type or search for implementations in order to debug it. Whether the implementing class is "Letter" or "Phone", I know it sends a message; I don't need the implementation details. So I don't want to see all the implemented cases spelled out as in your example ("switch Letter; Phone..."). In your example doLetterThing is responsible for one thing (doAction); since all the cases do the same thing, why show the developer all of them? They just make the code harder to read.
Easy to extend
Imagine that you are extending a big project where you don't have access to the source (an extreme example to make the benefit easier to see). In the Java world, I can say you are implementing an SPI (Service Provider Interface). I can show you two examples of this, https://github.com/apereo/cas and https://github.com/keycloak/keycloak, where you can see that interfaces and implementations are separated and you just implement new behavior when it is required, with no need to touch the original source. Why is this important? Imagine the following scenario again:
Let's suppose that Keycloak calls communication.sendMessage(). It doesn't know the implementations at build time. If you extend Keycloak in this case, you can have your own class that implements the Communication interface, let's say "Computer". Now, if you have your SPI on the classpath, Keycloak reads it and calls your computer.sendMessage(). We did not touch the source code, yet we extended the capabilities of the message-handling class. We couldn't achieve this if the code were written against switch cases, without touching the source.

Why are helper classes an anti-pattern?

A recent question here made me rethink this whole "helper classes are an anti-pattern" thing.
asawyer pointed out a few links in the comments to that question:
Helper classes is an anti-pattern.
While those links go into detail about how helper classes collide with the well-known principles of OOP, some things are still unclear to me.
For example "Don't repeat yourself". How can you achieve this without creating some sort of helper?
I thought you could derive a certain type and provide some features for it.
But I believe that isn't practical all the time.
Let's take a look at the following example;
please keep in mind I tried not to use any higher language features nor "language-specific" stuff. So this might be ugly, nested and not optimal...
//Check if the string is full of whitespaces
bool allWhiteSpace = true;
if (input == null || input.Length == 0)
    allWhiteSpace = false;
else
{
    foreach (char c in input)
    {
        if (c != ' ')
        {
            allWhiteSpace = false;
            break;
        }
    }
}
Let's create a bad helper class called StringHelper; the code becomes shorter:
bool isAllWhiteSpace = StringHelper.IsAllWhiteSpace(input);
So since this isn't the only time we need to check this, I guess "Don't repeat yourself" is fulfilled here.
How do we achieve this without a helper, considering that this piece of code isn't bound to a single class?
Do we need to inherit from string and call it BetterString?
bool allWhiteSpace = better.IsAllWhiteSpace;
Or do we create a class, StringChecker?
StringChecker checker = new StringChecker();
bool allWhiteSpace = checker.IsAllwhiteSpace(input);
So how do we achieve this?
Some languages (e.g. C#) allow the use of extension methods. Do they count as helper classes as well? I tend to prefer those over helper classes.
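For reference, a minimal sketch of the extension-method variant (a hypothetical StringExtensions class; note that the framework's string.IsNullOrWhiteSpace already covers a very similar check):

public static class StringExtensions
{
    // True only when the string is non-empty and contains nothing but spaces,
    // mirroring the logic of the original example.
    public static bool IsAllWhiteSpace(this string input)
    {
        if (input == null || input.Length == 0)
            return false;

        foreach (char c in input)
        {
            if (c != ' ')
                return false;
        }
        return true;
    }
}

// Usage: bool allWhiteSpace = input.IsAllWhiteSpace();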
Helper classes may be bad (there are always exceptions) because a well-designed OO system will have clearly understood responsibilities for each class. For example, a List is responsible for managing an ordered list of items. Some people new to OOD who discover that a class has methods to do stuff with its data sometimes ask "why doesn't List have a displayOnGUI method (or similar such thing)?". The answer is that it is not the responsibility of List to be concerned with the GUI.
If you call a class a "Helper" it really doesn't say anything about what that class is supposed to do.
A typical scenario is that there will be some class and someone decides it is getting too big and carves it up into two smaller classes, one of which is a helper. It often isn't really clear what methods should go in the helper and what methods should stay in the original class: the responsibility of the helper is not defined.
It is hard to explain unless you are experienced with OOD, but let me show by an analogy. By the way, I find this analogy extremely powerful:
Imagine you have a large team in which there are members with different job designations: e.g, front-end developers, back-end developers, testers, analysts, project managers, support engineers, integration specialists, etc. (as you like).
Each role you can think of as a class: it has certain responsibilities and the people fulfilling those responsibilities hopefully have the necessary knowledge to execute them. These roles will interact in a similar way to classes interacting.
Now imagine it is discovered that the back-end developers find their job too complicated. You can hire more if it is simply a throughput problem, but perhaps the problem is that the task requires too much knowledge across too many domains. It is decided to split up the back-end developer role by creating a new role, and maybe hire new people to fill it.
How helpful would it be if that new job description was "Back-end developer helper"? Not very ... the applicants are likely to be given a haphazard set of tasks, they may get confused about what they are supposed to do, their co-workers may not understand what they are supposed to do.
More seriously, the knowledge of the helpers may have to be exactly the same as the original developers as we haven't really narrowed down the actual responsibilities.
So "Helper" isn't really saying anything in terms of defining what the responsibilities of the new role are. Instead, it would be better to split-off, for example, the database part of the role, so "Back-end developer" is split into "Back-end developer" and "Database layer developer".
Calling a class a helper has the same problem and the solution is the same solution. You should think more about what the responsibilities of the new class should be. Ideally, it should not just shave-off some methods, but should also take some data with it that it is responsible for managing and thereby create a solution that is genuinely simpler to understand piece by piece than the original large class, rather than simply placing the same complicated logic in two different places.
I have found in some cases that a helper class is well designed, but all it lacks is a good name. In this case, calling it "Builder" or "Formatter" or "Context" instead of "Helper" immediately makes the solution far easier to understand.
Disclaimer: the following answer is based on my own experience and I'm not making a point of right and wrong.
IMHO, Helper classes are neither good nor bad, it all depends on your business/domain logic and your software architecture.
Here's Why:
Let's say that we need to implement the whitespace check you proposed, so first I will ask myself:
When would I need to check against white spaces?
Hence, imagine the following scenario: a blogging system with Users, Posts and Comments. Thus, I would have three classes:
class User {}
class Post {}
class Comment {}
Each class would have some fields of string type. Anyway, I would need to validate these fields, so I would create something like:
class UserValidator {}
class PostValidator {}
class CommentValidator {}
and I would place my validation policies in those three classes. But WAIT! All of the aforementioned classes need a check against null or all whitespaces? Hmmm...
The best solution is to take it higher up the tree and put it in some parent class called Validator:
class Validator
{
    // some code
    protected bool is_all_whitespaces() { /* the check from the example above */ return false; }
}
So whether you make the function is_all_whitespaces() abstract (with the Validator class being abstract too) or turn it into an interface is another issue, and it depends mostly on your way of thinking.
Back to the point: in this case I would have my classes (for the sake of giving an example) look like:
class UserValidator : Validator {}
class PostValidator : Validator {}
class CommentValidator : Validator {}
In this case I really don't need the helper at all. But let's say you have a function called multiD_array_group_by_key
and you are using it in different places, but you don't have a natural OOP-structured place for it; you can put it in some ArrayHelper, but by doing that you are a step behind being fully object-oriented.

Privacy in static languages

While I understand the value of implementation/interface distinction, I fail to see why most OO systems issue errors on access to private members.
I indeed wouldn't want to access private members in my main program.
But I would like to have that access for tests and debugging.
Is there any good reason whatsoever to issue errors and not warnings? The way I see it, I am forced to either write code I can test, but that doesn't utilize language support for interfaces, or use the language support, but have difficulty in testing.
EDIT
For those who suggested using public interfaces: you can, but it's less convenient.
On a conceptual level I find privacy that doesn't care about who or when quite crude.
Friend classes seem like a reasonable solution. Another might be an 'all public' compiler switch.
The way I see it, I am forced to either write code I can test, but that doesn't utilize language support for interfaces, or use the language support, but have difficulty in testing.
Why do you need to access private variables and functions? Either they are called (however indirectly) by public functions at some point, or they are just inaccessible pieces of code that shouldn't be there at all because there is no way to invoke them. Think about it, if the private method is completely impossible to invoke from outside the class in any way at all, is it ever going to be run?
If you really want to test a private method anyway, you can pull it into its own class. Plus, if it's so complex that it really needs to be tested individually, there's a good chance it deserves to be its own class in the first place. The other option is to just make it public whenever you need/want to test it, but not actually change the 'real' code (leaving it private). As others have said, some languages also have features that help you test these methods by exposing them slightly more, such as friend in C++, internal in C#, and package-private in Java. Occasionally the IDEs themselves even help out.
Is there any good reason whatsoever to issue errors and not warnings?
One big reason is not so you can't call them, it's so other people can't call them. Picture this: you're writing a library that's going to be used by a substantial number of clients. You've marked everything they shouldn't need to call private, and all the functionality they do need is public. Programmers go ahead and start using your library, write a bunch of code with it, and produce happy customers both out of themselves and their own clients.
A few months later, you decide to spice up your massively successful library, and find that you need to do a bit of refactoring. Hence, you [rename, add/remove a parameter from, delete] some of your private methods, but are careful to keep all of your public methods' interfaces exactly the same to make upgrading a seamless process. BUT in this universe, compilers only issue warnings and not errors when you access a private variable, and several of your client programmers wrote code that calls those private methods. Now, when they try to upgrade to the new version of your library, they get a bunch of real errors because they can't call those private methods anymore. They have to spend time finding out what went wrong with the code and rewriting potentially large parts of it that they don't remember anything about (did I mention that this is two years in the future?). Hence, they have to completely relearn how to use your library and rewrite their client code, which is far from fun for anybody. Now they're rather displeased that you were so inconsiderate as to literally break all of their code with your upgrade and make their lives far more difficult.
Guess what, when they were fixing the code, they researched and called your new private methods, so if you ever decide to change their interface when you issue an upgrade, the whole cycle starts over again. What was slightly more convenient for you just got you a bunch of unhappy customers.
Wait, weren't they idiots for calling my private methods? Why didn't they look at the warnings? This is their fault, not mine!
Well, yes, it is their fault, and they could have prevented the problem by taking care to note those warnings. But not everybody is a code-quality fanatic who wants to fix and understand warnings, and a substantial number of people just ignore them. The thing is, you could have prevented the whole thing yourself if compilers issued errors for trying to access private variables and methods instead of warnings, because otherwise the private keyword might as well not exist at all. You may have lost a little time because those methods are harder to test, but you have gained the power to keep less intelligent people from misusing your code and blaming you for any problems it causes them down the road.
One of my favorite tenets of software development (and product design in general) is that things should be easy to use correctly and hard or impossible to use incorrectly. True private members are the embodiment of this advice because they literally make your code impossible to use incorrectly.
[hypothetical retort:] Well, the people using my code should be smart enough to figure it out. All I'm asking them to do is just spend a little extra time to use the code correctly.
So you are consciously refusing to spend the time necessary to improve the quality of your code and make it easier to use? Then I don't want anything to do with your code. Obviously your time is more important than that of your clients, so I'll take the 2.5 seconds it requires to close the web page for your project and click on the next Google result. There are a lot more private members in the libraries you use than you might think, and the glorious thing is that you don't have to spend even a millisecond of your time worrying about them, because they're totally hidden from you and would only distract from the easier and better way of doing things that is provided in the public interface. If everything was public or wimpy-warning-issuing private, you'd have to sift through a greater number of functions before you actually found what you wanted.
Whenever you type private before a member function, you have just given yourself the power to change it in any way you want at any point in the future because nobody can touch it but you. The second someone else tries to access it they will get a show-stopping error because the compiler has your back and won't let them do anything stupid with your code when you've already provided perfectly everything they need in a much more usable form in your public interface.
Yes, it will make it slightly harder to test right now, but it also ensures that you won't have to worry about it in the future when you refactor, and makes your code a lot easier for other people to use. Go ahead and make it public temporarily (I kind of like your 'all-public' compiler switch idea :), but don't forget to switch it back when you're done, and you and your clients can all have the joy of working with simpler and more adaptable code. :D
The obvious reason would be that all too many people seem to ignore many (all?) warnings. In some languages (e.g., Python) it's pretty much as you've said -- something being "private" is basically advice against outside code using it directly, but the compiler doesn't enforce that.
As for how much sense that makes, I suspect it varies between languages -- in something like C++, the attitude toward them ("protect against Murphy, not Machiavelli") could be seen as justification for its being a warning instead of an error.
I think it's safe to say that in Ada, that would receive a much cooler reception, to say the least (and that's not to say that I think it would be received warmly by C++ programmers either, just that they might not hate the idea quite as much as most Ada programmers would).
On the other hand, I have to wonder about the design of a class that can't be tested (at least reasonably well) via its external interface. On the rare (should be rare, anyway) occasion that you can't, I think making the test class a friend (in C++ parlance, though many others have similar concepts) would be fairly easy to justify.
There are several advantages to having complete encapsulation:
Security. Strongly-typed OOP languages with strong encapsulation can have certain guarantees about the security of the data in the program. The Java language was designed with safety and security in mind, so certain library classes (for example, String or SecurityManager) cannot have their fields accessed. This prevents malicious code from doing Bad Things to these objects, and allows code to assume the objects are safe.
Maintainability. One of the major reasons to keep private fields and methods private is to allow the implementation to change seamlessly; as long as no updates are made to the public interface, code using the updated class can work with no changes. If you allow access to private fields and then change the implementation, you risk breaking an unbounded amount of code.
Stability/Verifiability/Testability. Classes typically impose invariants on their fields - for example, an implementation of a dynamic array might require that a field tracking how much space is used actually correspond to the total number of elements. Allowing people to arbitrarily access private fields, even with a warning, makes it possible to break these invariants. Without the ability to count on the invariants, it becomes difficult or impossible to reason about the correctness of the code. Additionally, if you do break an invariant somewhere in the code, you would conceivably have to look at every piece of code in the program that has access to the object, since any of them might be accessing the private field. With strong encapsulation, these invariants can't break, and with semi-encapsulation through friends or package-private mechanisms, the amount of code to look at is bounded.
As for your question about testing - many languages allow encapsulation to be broken in certain cases; C++ has friend, Java has package-private, etc., so that the class can say "normally you can't touch these, but exceptions can be made." You can then make your testing code a friend or in the same package as the main class in order to test it more thoroughly.
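As one concrete illustration (C# here; InternalsVisibleTo is the standard attribute in System.Runtime.CompilerServices, and the assembly name is hypothetical):

// In the library assembly: expose internals only to the named test assembly.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyLibrary.Tests")]

public class DynamicArray
{
    // internal rather than public: the test project can inspect this field,
    // ordinary consumers of the library cannot.
    internal int used;

    private object[] items = new object[4];

    public void Add(object item)
    {
        if (used == items.Length)
            System.Array.Resize(ref items, items.Length * 2);
        items[used++] = item;
    }
}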
Hope this helps!
The way that I see it is that you need to forget about accessing anything in an object unless you have a way of doing that in that object's interface. I think a correct OO system ought to issue errors (and not warnings) if you are attempting to directly access implementation specific private members. I attended a good talk by Kevlin Henney on this subject recently and I found it very useful, a copy may be viewed here: http://www.infoq.com/presentations/It-Is-Possible-to-Do-OOP-in-Java (note that it is mainly about java but also includes comparisons to other OO systems)
For testing, most of the time I find that most of the code under test is covered by public interface calls. It is only on rare occasions that I need to employ something like runtime reflection to get absolutely 100% coverage.
I was about to post, "Strong enforcement of encapsulation keeps your boss from stepping on your private members." until I realized how that might sound wrong, but on second thought It's
probably just about right.

Using SOA principles over OOD in non-service code

Our architect has spoken about using SOA techniques throughout our codebase, even on interfaces that are not actually hosted as a service. One of his requests is that we design our interface methods so that we make no assumptions about the actual implementation. So if we have a method that takes in an object and needs to update a property on that object, we explicitly need to return the object from the method. Otherwise we would be relying on the fact that Something is a reference type and C# allows us to update properties on a reference type by default.
So:
public void SaveSomething(Something something)
{
    //save to database
    something.SomethingID = 42;
}
becomes:
public Something SaveSomething(Something something)
{
    //save to database
    return new Something
    {
        //all properties here including new primary key from db
    };
}
I can't really get my head around the benefits of this approach and was wondering if anyone could help?
Is this a common approach?
I think your architect is trying to get your code to have fewer side effects. In your specific example, there isn't a benefit. In many, many cases, your architect would be right, and you can design large parts of your application without side effects, but one place this cannot happen is during operations against a database.
What you need to do is get familiar with functional programming, and prepare for your conversations about cases like these with your architect. Remember his/her intentions are most likely good, but specific cases are YOUR domain. In this case, the side effect is the point; you would most likely want a return type of bool to indicate success, but returning a new object doesn't make sense.
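One possible shape of that suggestion (sketch only; Something and SomethingID come from the question, and the database side effect stays):

public bool SaveSomething(Something something)
{
    //save to database
    something.SomethingID = 42;
    return true; // false if the save failed
}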
Show your architect that you understand limiting side effects, but certain side effects must be allowed (database, UI, network access, et cetera), and you will likely find that he or she agrees with you. Find a way to isolate the desired side effects and make them clear to him or her, and it will help your case. Your architect will probably appreciate it if you do this in the spirit of collaboration (not trying to shoot holes in his or her plan).
A couple resources for FP:
A great tutorial on Functional Programming
Wikipedia's entry on Functional programming
Good luck, I hope this helps.

What is the point of defining Access Modifiers?

I understand the differences between them (at least in C#). I know the effects they have on the elements to which they are assigned. What I don't understand is why it is important to implement them - why not have everything Public?
The material I read on the subject usually goes on about how classes and methods shouldn't have unnecessary access to others, but I've yet to come across an example of why/how that would be a bad thing. It seems like a security thing, but I'm the programmer; I create the methods and define what they will (or will not) do. Why would I spend all the effort to write a function which tried to change a variable it shouldn't, or tried to read information in another class, if that would be bad?
I apologize if this is a dumb question. It's just something I ran into on the first articles I ever read on OOP, and I've never felt like it really clicked.
"I'm the programmer" is a correct assumption only if you're the only programmer.
In many cases, other programmers work with the first programmer's code. They use it in ways he didn't intend by fiddling with the values of fields they shouldn't, and they create a hack that works, but breaks when the producer of the original code changes it.
OOP is about creating libraries with well-defined contracts. If all your variables are public and accessible to others, then the "contract" theoretically includes every field in the object (and its sub-objects), so it becomes much harder to build a new, different implementation that still honors the original contract.
Also, the more "moving parts" of your object are exposed, the easier it is for a user of your class to manipulate it incorrectly.
You probably don't need this, but here's an example I consider amusing:
Say you sell a car with no hood over the engine compartment. Come nighttime, the driver turns on the lights. He gets to his destination, gets out of the car and then remembers he left the light on. He's too lazy to unlock the car's door, so he pulls the wire to the lights out from where it's attached to the battery. This works fine - the light is out. However, because he didn't use the intended mechanism, he finds himself with a problem next time he's driving in the dark.
Living in the USA (go ahead, downvote me!), he refuses to take responsibility for his incorrect use of the car's innards, and sues you, the manufacturer for creating a product that's unsafe to drive in the dark because the lights can't be reliably turned on after having been turned off.
This is why all cars have hoods over their engine compartments :)
A more serious example: You create a Fraction class, with a numerator and denominator field and a bunch of methods to manipulate fractions. Your constructor doesn't let its caller create a fraction with a 0 denominator, but since your fields are public, it's easy for a user to set the denominator of an existing (valid) fraction to 0, and hilarity ensues.
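A minimal sketch of that Fraction point (hypothetical class; the constructor guards the invariant, and keeping the fields private means nothing can break it afterwards):

public class Fraction
{
    private int numerator;
    private int denominator;

    public Fraction(int numerator, int denominator)
    {
        if (denominator == 0)
            throw new System.ArgumentException("Denominator must not be zero.");
        this.numerator = numerator;
        this.denominator = denominator;
    }

    // Read-only access: callers can see the values but cannot set denominator to 0.
    public int Numerator => numerator;
    public int Denominator => denominator;
}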
First, nothing in the language forces you to use access modifiers - you are free to make everything public in your class if you wish. However, there are some compelling reasons for using them. Here's my perspective.
Hiding the internals of how your class operates allows you to protect that class from unintended uses. While you may be the creator of the class, in many cases you will not be the only consumer - or even maintainer. Hiding internal state protects the class for people who may not understand its workings as well as you. Making everything public creates the temptation to "tweak" the internal state or internal behavior when the class isn't acting the way you may want - rather than actually correcting the public interface or internal implementation. This is the road to ruin.
Hiding internals helps to de-clutter the namespace, and allows tools like Intellisense to display only the relevant and meaningful methods/properties/fields. Don't discount tools like Intellisense - they are a powerful means for developers to quickly identify what they can do with your class.
Hiding internals allows you to structure an interface appropriate for the problem the class is solving. Exposing all of the internals (which often substantially outnumber the exposed interface) makes it hard to later understand what the class is trying to solve.
Hiding internals allows you to focus your testing on the appropriate portion - the public interface. When all methods/properties of a class are public, the number of permutations you must potentially test increases significantly - since any particular call path becomes possible.
Hiding internals helps you control (enforce) the call paths through your class. This makes it easier to ensure that your consumers understand what your class can be asked to do - and when. Typically, there are only a few paths through your code that are meaningful and useful. Allowing a consumer to take any path makes it more likely that they will not get meaningful results - and will interpret that as your code being buggy. Limiting how your consumers can use your class actually frees them to use it correctly.
Hiding the internal implementation frees you to change it with the knowledge that it will not adversely impact consumers of your class - so long as your public interface remains unchanged. If you decide to use a dictionary rather than a list internally - no one should care. But if you made all the internals of your class available, someone could write code that depends on the fact that you internally use a list. Imagine having to change all of the consumers when you want to change such choices about your implementation. The golden rule is: consumers of a class should not care how the class does what it does.
It is primarily a hiding and sharing thing. You may produce and use all your own code, but other people provide libraries, etc. to be used more widely.
Making things non-public allows you to explicitly define the external interface of your class. The non-public stuff is not part of the external interface, which means you can change anything you want internally without affecting anyone using the external interface.
You only want to expose the API and keep everything else hidden. Why?
OK, let's assume you want to make an awesome Matrix library, so you write:
class Matrix {
    public Object[][] data; // the data your matrix stores
    ...
    public Object[] getRow()
}
By default, any other programmer that uses your library will want to maximize the speed of his program by tapping into the underlying structure.
//Someone else's function
Object one() { return data[0][0]; }
Now, you discover that using a single flat array to emulate the matrix will increase performance, so you change data from
Object[][] data => Object[] data
which causes Object one() to break. In other words, by changing your implementation you broke backward compatibility :-(
By encapsulating, you separate the internal implementation from the external interface (achieved with the private modifier).
That way you can change the implementation as much as you like without breaking backward compatibility :D Profit!!!
Of course, if you are the only programmer that is ever going to modify or use that class, you might as well keep it public.
Note: There are other major benefits for encapsulating your stuff, this is just one of many. See Encapsulation for more details
I think the best reason for this is to provide layers of abstraction on your code.
As your application grows, you will need to have your objects interacting with other objects. Having publicly modifiable fields makes it harder to wrap your head around your entire application.
Limiting what you make public on your classes makes it easier to abstract your design so you can understand each layer of your code.
For some classes, it may seem ridiculous to have private members with a bunch of methods that just set and get those values. The reason for it is this: let's say you have a class where the members are public and directly accessible:
class A
{
    public int i;
    ....
}
And now you go on using that in a bunch of code you wrote. After writing a bunch of code that directly accesses i, you realize that i should have some constraints on it, like always being >= 0 and less than 100 (for argument's sake).
Now, you could go through all of your code where you used i and check for this constraint, but instead you could just add a public property with a setter that does it for you:
class A
{
    private int i;

    public int I
    {
        get { return i; }
        set
        {
            if (value >= 0 && value < 100)
                i = value;
            else
                throw new System.ArgumentOutOfRangeException();
        }
    }
}
This hides all of that error checking. While the example is trite, situations like these come up quite often.
It is not related to security at all.
Access modifiers and scope are all about structure, layers, organization, and communication.
If you are the only programmer, it is probably fine until you have so much code that even you can't remember it all. At that point, it's just like a team environment - the access modifiers and the structure of the code guide you to stay within the architecture.