I came across several interviews where I was asked why encapsulation is used. Whose requirement is encapsulation actually serving? Is it for the users of a program? Or is it for co-workers? Or is it to protect the code from hackers?
Encapsulation helps in isolating implementation details from the behavior exposed to clients of a class (other classes/functions that are using this class), and gives you more control over coupling in your code. Consider this example, similar to the one in Robert Martin's book Clean Code:
public class Car
{
    // ...
    public float GetFuelPercentage() { /* ... */ }
    // ...
    private float gasoline;
    // ...
}
Note that the client using the function which gives the amount of fuel in the car doesn't care what type of fuel the car uses. This abstraction separates the relevant concern (amount of fuel) from an unimportant (in this context) detail: whether it is gas, oil or anything else.
The second thing is that the author of the class is free to do anything they want with the internals of the class, for example changing gasoline to oil, as long as they don't change its behaviour. This is thanks to the fact that they can be sure no one depends on these details, because they are private. The fewer dependencies there are in the code, the more flexible and easier to maintain it is.
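As a hedged sketch of what that freedom looks like (this continuation is invented here, not taken from Clean Code; it keeps the names from the snippet above):

public class Car
{
    // ...
    // Same public contract as before, so no caller breaks.
    public float GetFuelPercentage()
    {
        return 100.0f * charge / capacity;
    }
    // ...
    // The private details changed (say, the car became electric),
    // which is safe precisely because nothing outside could see them.
    private float charge;   // was: private float gasoline;
    private float capacity;
}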
One other thing, correctly noted in the underrated answer by utnapistim: low coupling also helps in testing the code and maintaining those tests. The less complicated a class's interface is, the easier it is to test. Without encapsulation, with everything exposed, it would be hard to know what to test and how.
To reiterate some discussions in the comments:
No, encapsulation is not the most important thing in OOP. I'd dare even to say that it's not very important. The important things are those encouraged by encapsulation - like loose coupling. But it is not essential - a careful developer can maintain loose coupling without encapsulating variables. As pointed out by vlastachu, Python is a good example of a language which does not have mechanisms to enforce encapsulation, yet it is still feasible for OOP.
No, hiding your fields behind accessors is not encapsulation. If the only thing you've done is write "private" in front of variables and then mindlessly provide a get/set pair for each of them, then in fact they are not encapsulated. Someone in a distant part of the code can still meddle with the internals of your class, and can still depend on them (though it is, of course, a bit better that they depend on a method rather than on a field).
No, encapsulation's primary goal is not to avoid mistakes. Its primary goals are at least similar to those listed above, and thinking that encapsulation will defend you from making mistakes is naive. There are lots of other ways to make a mistake besides altering a private variable, and altering a private variable is not so hard to find and fix. Again, Python is a good example for the sake of this argument, as it can have encapsulation without enforcing it.
Encapsulation prevents people who work on your code from making mistakes, by making sure that they only access things they're supposed to access.
At least in most OO languages, encapsulation is roughly equivalent to the lock on the door of a bathroom.
It's not intended to keep anybody out if they really insist on entering.
It is intended as a courtesy to let people know that entering will lead mostly to:
embarrassment, and
a stinking mess.
Encapsulation allows you to formalize your interfaces, separating levels of abstraction (i.e. "application logic accesses IO code only in this and this way").
This, in turn, allows you to change the implementation of a module (the data and algorithms inside the module) without changing the interface (and without affecting client code).
This ability to modify modules independently of each other improves your ability to measure progress and make predictions about a project's deadlines.
It also allows you to test modules separately and reuse them in other projects (because encapsulation also lowers inter-dependencies and improves modularity of your code).
Not enforcing encapsulation tends to lead to a project's failure (the problem grows with the complexity of the project).
I architected a fast-track project. Encapsulation reduces the propagation of change through the system. Changes cost time and money.
When code is not encapsulated, a person has to search many files to find where to make the change(s). Worse, there are the questions "Did I find all the places?" and "What effect do all of these scattered changes have on the entire system?"
I'm working on an embedded medical device and quality is imperative. Also, all changes must be documented, reviewed, unit tested and finally system tested. By using encapsulation, we can reduce the number of changes and confine their locality, reducing the number of files that must be retested.
It looks to me like Bob Martin needed something starting with O to make SOLID, and found this (possibly useless) Open/Closed principle in some old book.
How can Open/Closed co-exist with Single Responsibility, which states that a class should have a single reason to change?
If I want to follow Open/Closed in a long-living system, am I supposed to have a chain of dozens/hundreds of classes, each extending the previous?
The Open/Closed principle implies that you can create systems in which new features are added by adding new code, as opposed to changing old code. To conform perfectly to the open/closed principle, one would need perfect foresight: in order to create a system that is fully open to extension and closed to all modification, one must be able to predict the future perfectly, knowing in advance what new features the customer will request so as to put extension points into the code.
Having said that, we can develop systems that conform well enough to the open/closed principle. By using an iterative process with lots of feedback and refactoring, we can improve the parts of the system that change most often by making them open to extension and closed to modification.
As Bob Martin says in one of his lectures: "We cannot completely conform to the open/closed principle. That doesn't mean we should simply give up on it entirely. It may be difficult to make an entire system conform to the open/closed principle, but it's not difficult to make functions or classes or smaller components conform to it."
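For a hedged sketch of what a small open/closed component looks like (invented names, not from Martin's lecture): new behaviour arrives as a new class, and the old classes stay closed.

interface Exporter {
    String export(String report);
}

class PdfExporter implements Exporter {
    public String export(String report) { return "%PDF..." + report; }
}

// New feature, new code - nothing above is edited:
class CsvExporter implements Exporter {
    public String export(String report) { return report.replace(' ', ','); }
}

class ReportService {
    private final Exporter exporter;
    ReportService(Exporter exporter) { this.exporter = exporter; }
    String run(String report) { return exporter.export(report); }
}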
I also came here wondering about the whole "Closed for Modification" bit but I've come to a conclusion that I feel is best demonstrated with an example:
public class FixedSizeCache {
    private int maxCacheSize = 8192;

    public FixedSizeCache() {
        // ...
    }
    // ...
}
The above example doesn't violate the Single Responsibility Principle but it violates the Open/Closed Principle in a fairly obvious way: whenever you need a FixedSizeCache of a different fixed size you would need to modify the source of the class.
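A hedged sketch of the obvious fix: make the size a constructor parameter, so supporting a new size no longer requires editing the class.

public class FixedSizeCache {
    private final int maxCacheSize;

    // Callers pick the size; a new size no longer means
    // modifying this class's source.
    public FixedSizeCache(int maxCacheSize) {
        this.maxCacheSize = maxCacheSize;
    }
    // ...
}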
In other words, we should strive to write code that doesn't violate the Open/Closed Principle in obvious ways but this doesn't mean we need to write code that is utterly locked in stone never to be modified (because business requirements change and that should be reflected in our code).
But if the same business requirement changes 7 times and you need to modify your code 7 times, you're probably violating the Open/Closed Principle and are due for a refactoring.
Beautifully formulated question!
If I want to follow Open/Closed in a long-living system, am I supposed to have a chain of dozens/hundreds of classes, each extending the previous?
This is exactly what I observed some time ago on a "long living system": dozens of classes extending superclasses by small bits. On the other hand, modern Python constructs go exactly against this principle, and I had the feeling that this violation of Open/Closed in modern Python was the root cause of the usefulness and simplicity of many of Python's libraries. So I checked SO, and found your question. Well formulated!
Why do most programming languages give you the ability to make methods/functions, classes and properties private or public?
Does it make much of a difference to, let's say, have all classes, methods and properties be public?
I know that if you have all your methods, classes and properties set to private, nothing will work.
So at least I know that much.
But does the distinction between the two matter? What's the big deal if one class knows another Class "that is meant to be private" exists?
When you make something public, you enter a contract with the user class: "Hey, this is what I offer; use it or not." Changing the public interface is expensive, because you have to change all code using that public interface, too. Think of the developer of a framework like Cocoa, used by thousands of developers. If you change one public method, for example by removing it, thousands of apps break. They have to be changed, too.
So making everything public simply means that you cannot change anything anymore. (You can, but the people will get angry at one point.)
Let's think of having a class implementing a list. There is a method to sort it: sortListWithKey. You make that public because you want the users of the class to get a sorted list. This is good.
There are several algorithms for sorting. Let's say you implement one that needs to calculate the median (the middle element). You need this method internally for your sorting algorithm, so it is enough to implement it privately. Changing the whole structure of the data storage, including the implemented sorting algorithm, is then no problem and will not break existing code using the class.
But if you had made the median method public (remember: you implemented it because you needed it internally), you would still have to keep it, even if the new sorting algorithm does not need it. You cannot remove it anymore, and with the new structure it may be very hard (and/or expensive) to keep the method working.
So make public the part of your implementation that is useful for the users, but no more. Otherwise you shackle yourself.
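A minimal Java sketch of that idea (the class and the quicksort choice are invented for illustration; the answer's sortListWithKey is kept):

public class SortedList {
    private int[] items = new int[0];

    // Public contract: clients rely only on ending up with a sorted list.
    public void sortListWithKey() {
        quicksort(0, items.length - 1);
    }

    // Internal helper for the current algorithm. Because it is private,
    // it can be dropped when the algorithm changes - no caller can
    // possibly depend on it.
    private int median(int lo, int hi) {
        return lo + (hi - lo) / 2;
    }

    private void quicksort(int lo, int hi) {
        if (lo >= hi) return;
        int pivot = items[median(lo, hi)];
        // ... partition around pivot and recurse ...
    }
}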
If humans had perfect memory, documentation and communication skills, and made no mistakes, then there might not be a useful difference. But using or changing something from the wrong file and then forgetting about it (or not documenting it clearly for the rest of the team, or yourself in the future) is too common a cause of hard-to-find bugs.
Marking things private makes it a bit more work to create the same types of bugs, and thus less likely that lazy/sleepy programmers will do all that extra work just to mess up the application.
In computer science this is called information hiding. You, as a programmer, want to offer only the necessary methods or properties to other programmers who will use your public API; this is how you achieve so-called low coupling between modules.
So I've been building an app for a while now and over time one of my classes (a UIViewController) has grown quite large, doing quite a lot of stuff with one view. As such, I've had to make it conform to a lot of protocols (12 now). I'm now starting to worry that I'm conforming to a few too many.
Is there a problem with conforming to a lot of protocols in one class? Will there be a performance hit or anything like that? I'm not looking for personal opinions, but rather things like possible performance issues or if it's against a best-practices/style guide document (such as one distributed by Apple).
Some of the protocols my class conforms to (excluding a few custom protocols):
UIAlertViewDelegate, UIActionSheetDelegate, MFMailComposeViewControllerDelegate,
UITableViewDataSource, UITableViewDelegate, UITextFieldDelegate,
UIPopoverControllerDelegate, UIBarPositioningDelegate,
ABPeoplePickerNavigationControllerDelegate
Edit:
As noted by a few of the answers, the majority of the protocols are UI related - and the view itself is not cluttered. UIAlertViewDelegate, UIActionSheetDelegate, MFMailComposeViewControllerDelegate, and ABPeoplePickerNavigationControllerDelegate are all displayed separately (modally in the last two cases). UITableViewDataSource and UITableViewDelegate simply handle a table. UITextFieldDelegate manages the search field (to allow me to check whether the entered text is 'valid'). UIBarPositioningDelegate is required on iOS 7 for appearance.
The class isn't overly large and only handles itself, without too much coupling (at least, in my opinion).
From the official Apple documentation:
Tip: If you find yourself adopting a large number of protocols in a class, it may be a sign that you need to refactor an overly-complex class by splitting the necessary behavior across multiple smaller classes, each with clearly-defined responsibilities. One relatively common pitfall for new OS X and iOS developers is to use a single application delegate class to contain the majority of an application's functionality (managing underlying data structures, serving the data to multiple user interface elements, as well as responding to gestures and other user interaction). As complexity increases, the class becomes more difficult to maintain.
So the answer is that it is not bad to do so in terms of performance, but it is not a good practice.
https://developer.apple.com/library/ios/documentation/cocoa/conceptual/ProgrammingWithObjectiveC/WorkingwithProtocols/WorkingwithProtocols.html
There is no penalty for having a class conform to very many protocols - apart from negligibly higher compilation time/space (due to bigger compilation tables for that class, which is really not meaningful, IMO) and possibly a class readability issue.
Indeed, when you specify that a class conforms to a protocol, you are simply telling the compiler to add all the methods declared inside that protocol to the class interface. This has practically no impact on either performance or memory footprint.
The main issue you have is, IMO, class readability. In general you have an issue when your class offers too many public methods; this makes it difficult to understand what the class is for. Refactoring would be strongly suggested, but again, this has nothing to do with performance.
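To make the suggested refactoring concrete, here is a hedged sketch (in Java, with interfaces standing in for the Objective-C protocols; all names invented):

// Instead of one controller conforming to every protocol itself,
// each responsibility moves into a small collaborator class.
interface TableSource { int rowCount(); String rowAt(int i); }
interface SearchValidator { boolean isValid(String text); }

class ContactsTableSource implements TableSource {
    public int rowCount() { return 0; /* ... */ }
    public String rowAt(int i) { return ""; /* ... */ }
}

class NameSearchValidator implements SearchValidator {
    public boolean isValid(String text) { return !text.trim().isEmpty(); }
}

class ContactsController {
    // The controller composes the collaborators; it no longer has to
    // implement a dozen interfaces on its own.
    private final TableSource tableSource = new ContactsTableSource();
    private final SearchValidator validator = new NameSearchValidator();
}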
From what you pasted, the protocols you are adding to your class seem to relate to UI elements that you are managing inside that class. Then either the view your class manages is overly complex, or you genuinely need those delegates.
Best practice for programming in general is to separate concerns and functions as much as possible. To me, it looks like your app consists of one crowded controller. Why is that? Too much code in one class or file makes it harder to debug and maintain, plus your interface must be really cluttered if you have all those elements in it. I can't give specific advice without seeing more code, but in general, you have too many things in one controller.
I don't think this causes any performance penalty or error.
But it does seem that your class is highly coupled, which is bad design practice - though that's not necessarily true here. How many lines of code does your class have?
If that's a lot, and you feel that your class is doing work it shouldn't, you can refactor your code.
I cannot tell you about performance without actually seeing code, but it's possible that if two or more protocols declare identically named methods (it has happened to me), you will get unexpected behavior, which could include degraded performance. Suppose two protocols define a basic method such as - (void)update; you implement only one update method, but both protocols want to use it, so at best you are getting too many calls to update (degraded performance). If one of the protocols declares the method as @optional, then it's not always immediately obvious that something bad is happening.
What I can tell you is that this kind of monolithic design is universally considered poor. It sounds like you have a big ball of mud going on.
In OO apps, it's common to have an Application class which holds your top-level managers or whatever. This makes it hard to avoid having a theApp global object, or using a Singleton or something.
Does it matter? Should we seek to make this as OO as possible, or accept that OO principles break down sometimes?
Does it matter? Should we seek to make this as OO as possible, or accept that OO principles break down sometimes?
Sometimes OO theory meets the real world, and you should strive to make things work for your customers first and foremost. A singleton here or there is not a problem for your customer, or for maintainability.
OOP is mainly there to make code easier to maintain, reuse and understand. As long as those goals are met, there is no need to refactor for purity reasons only.
Having a global theApp object singleton doesn't necessarily violate OO principles, so long as data tied to it is properly encapsulated and whatnot.
There's also the fact that few OSes actually have an OO core, meaning that the application loader isn't object oriented to begin with.
In any case, absolutism on this point is dangerous; some programming languages have an (IMO) overly zealous approach to the whole thing, dictating every function be a method or the like, even when this doesn't make a lick of sense. (System.Math.sin(x), anyone?)
The most effective approach is usually mixing the two methodologies, using functions for functions, and methods for methods; and by extension, using Singletons for things that truly are singular; such as the application object or interfaces to some system services.
Edit: On System.Math.sin(x), it should be made clear that sin(x) is a function in quite literally every sense of the word, and making it a method on a singleton is wildly irresponsible, or at least a bit silly. In the comments it was suggested that another class might want to use the name sin() for a method, but since methods and functions reside in separate namespaces in any case, this really isn't relevant.
I think the goal should be to design as well as possible. I don't want to have a mindset of seeking "badges" or stamps of approval, so I'm not interested in being "as OO as possible"; rather, I seek to make conscious trade-offs. We favour concepts such as decoupling and single responsibility not because they're OO or because we can claim to be using a Design Pattern, but because they increase ease of development, maintainability, testability and so on. We also favour simplicity, because that too increases maintainability and testability - the "You ain't gonna need it" principle sometimes leads us to leave things a little more tightly coupled because we don't need the flexibility right now.
So, to consider your example: there may well be a singleton in the sense that yes, there is only one of something (a thread pool or some such), but does the code using it need to know that it's a singleton? With a bit of care and the use of factories or injection, we can limit the knowledge of the n-gleton-ness.
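A hedged Java sketch of that idea (all names invented): the code that uses the pool depends only on an interface, and only the factory knows there happens to be exactly one instance.

interface WorkerPool {
    void submit(Runnable task);
}

class Pools {
    // The singleton-ness lives here and nowhere else.
    private static final WorkerPool INSTANCE = Runnable::run; // placeholder impl
    static WorkerPool defaultPool() { return INSTANCE; }
}

class ReportJob {
    private final WorkerPool pool;

    // Injected rather than fetched from a global: this class cannot
    // tell (and does not care) whether the pool is a singleton.
    ReportJob(WorkerPool pool) { this.pool = pool; }

    void start() {
        pool.submit(() -> { /* do the work */ });
    }
}

// Wiring, done in one place:
// new ReportJob(Pools.defaultPool()).start();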
There is no breakdown of OO by having a "theApp" object.
When designing a new system or getting your head around someone else's code, what are some telltale signs that something has gone wrong in the design phase? Are there clues to look for in class diagrams and inheritance hierarchies, or even in the code itself, that just scream for a design overhaul, particularly early in a project?
The things that mostly stick out for me are "code smells".
Mostly I'm sensitive to things that go against "good practice".
Things like:
Methods that do things other than what you'd think from the name (e.g. a FileExists() that silently deletes zero-byte files)
A few extremely long methods (sign of an object wrapper around a procedure)
Repeated use of switch/case statements on the same enumerated member (a sign of sub-classes needing extraction; see the sketch after this list)
Lots of member variables that are used for processing, not to capture state (might indicate need to extract a method object)
A class that has lots of responsibilities (violation of the Single Responsibility Principle)
Long chains of member access (this.that is fine, this.that.theOther is fine, but my.very.long.chain.of.member.accesses.for.a.result is brittle)
Poor naming of classes
Use of too many design patterns in a small space
Working too hard (rewriting functions already present in the framework, or elsewhere in the same project)
Poor spelling (anywhere) and grammar (in comments), or comments that are simply misleading
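To illustrate the switch/case item above, a hedged Java sketch (invented example) of the sub-class extraction it usually suggests:

// Before: this switch on ShapeKind tends to get repeated in area(),
// perimeter(), draw(), ... - one copy per operation.
enum ShapeKind { CIRCLE, SQUARE }

class Shapes {
    static double area(ShapeKind kind, double size) {
        switch (kind) {
            case CIRCLE: return Math.PI * size * size;
            case SQUARE: return size * size;
            default:     throw new IllegalArgumentException("unknown kind");
        }
    }
}

// After: each enum member becomes a subclass and the repeated
// switches disappear.
interface Shape { double area(); }

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}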
I'd say the number one rule of poor OO design (and yes I've been guilty of it too many times!) is:
Classes that break the Single Responsibility Principle (SRP) and perform too many actions
Followed by:
Too much inheritance instead of composition, i.e. classes that derive from a sub-type purely so they get functionality for free. Favour Composition over Inheritance.
Impossible to unit test properly.
Anti-patterns
Software design anti-patterns
Abstraction inversion: Not exposing implemented functionality required by users, so that they re-implement it using higher-level functions
Ambiguous viewpoint: Presenting a model (usually OOAD) without specifying its viewpoint
Big ball of mud: A system with no recognizable structure
Blob: Generalization of God object from object-oriented design
Gas factory: An unnecessarily complex design
Input kludge: Failing to specify and implement handling of possibly invalid input
Interface bloat: Making an interface so powerful that it is extremely difficult to implement
Magic pushbutton: Coding implementation logic directly within interface code, without using abstraction.
Race hazard: Failing to see the consequence of different orders of events
Railroaded solution: A proposed solution that while poor, is the only one available due to poor foresight and inflexibility in other areas of the design
Re-coupling: Introducing unnecessary object dependency
Stovepipe system: A barely maintainable assemblage of ill-related components
Staralised schema: A database schema containing dual purpose tables for normalised and datamart use
Object-oriented design anti-patterns
Anemic Domain Model: The use of a domain model without any business logic, which is not OOP because each object should have both attributes and behaviors
BaseBean: Inheriting functionality from a utility class rather than delegating to it
Call super: Requiring subclasses to call a superclass's overridden method
Circle-ellipse problem: Subtyping variable-types on the basis of value-subtypes
Empty subclass failure: Creating a class that fails the "Empty Subclass Test" by behaving differently from a class derived from it without modifications
God object: Concentrating too many functions in a single part of the design (class)
Object cesspool: Reusing objects whose state does not conform to the (possibly implicit) contract for re-use
Object orgy: Failing to properly encapsulate objects permitting unrestricted access to their internals
Poltergeists: Objects whose sole purpose is to pass information to another object
Sequential coupling: A class that requires its methods to be called in a particular order
Singletonitis: The overuse of the singleton pattern
Yet Another Useless Layer: Adding unnecessary layers to a program, library or framework. This became popular after the first book on programming patterns.
Yo-yo problem: A structure (e.g., of inheritance) that is hard to understand due to excessive fragmentation
This question makes the assumption that object-oriented means good design. There are cases where another approach is much more appropriate.
One smell is objects having hard dependencies/references to other objects that aren't a part of their natural object hierarchy or domain related composition.
Example: Say you have a city simulation. If a Person object has a NearestPostOffice property, you are probably in trouble.
One thing I hate to see is a base class down-casting itself to a derived class. When you see this, you know you have problems.
Other examples might be:
Excessive use of switch statements
Derived classes that override everything
In my view, all OOP code degenerates to procedural code over a sufficiently long time span.
Granted, if you read my most recent question, you might understand why I am a little jaded.
The key problem with OOP is that it doesn't make it obvious that your object construction graph should be independent of your call graph.
Once you fix that problem, OOP actually starts to make sense. The problem is that very few teams are aware of this design pattern.
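My reading of that pattern, as a hedged Java sketch (names invented): all construction happens in one composition root, and the call graph below it only ever sees interfaces.

interface Store { void save(String record); }

class DiskStore implements Store {
    public void save(String record) { /* write to disk */ }
}

class Service {
    private final Store store;
    // Collaborators are handed in; Service never calls "new DiskStore()".
    Service(Store store) { this.store = store; }
    void handle(String record) { store.save(record); }
}

class Main {
    public static void main(String[] args) {
        // Construction graph: built once, here.
        Service service = new Service(new DiskStore());
        // Call graph: runs independently of how things were built.
        service.handle("hello");
    }
}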
Here are a few:
Circular dependencies
You wish property XYZ of a base class wasn't protected/private
You wish your language supported multiple inheritance
Within a long method, sections surrounded with #region / #endregion - in almost every case I've seen, that code could easily be extracted into a new method OR needed to be refactored in some way.
Overly-complicated inheritance trees, where the sub-classes do very different things and are only tangentially related to one another.
Violation of DRY - sub-classes that each override a base method in almost exactly the same way, with only a minor variation. An example: I recently worked on some code where the subclasses each overrode a base method and where the only difference was a type test ("x is ThisType" vs "x is ThatType"). I implemented a method in the base that took a generic type T, that it then used in the test. Each child could then call the base implementation, passing the type it wanted to test against. This trimmed about 30 lines of code from each of 8 different child classes.
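Roughly what that base-class method might have looked like - a hedged reconstruction in Java, not the original code:

class ThisType { }
class ThatType { }

abstract class Handler {
    // One generic test in the base replaces eight near-identical
    // overrides that differed only in the type being checked.
    protected <T> boolean accepts(Object x, Class<T> type) {
        return type.isInstance(x);
    }
}

class ThisHandler extends Handler {
    boolean acceptsInput(Object x) { return accepts(x, ThisType.class); }
}

class ThatHandler extends Handler {
    boolean acceptsInput(Object x) { return accepts(x, ThatType.class); }
}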
Duplicate code = code that does the same thing in more than one place. In my experience, this is the biggest mistake that can occur in OO design.
Objects are good; creating a gazillion of them is bad OO design.
Having all your objects inherit from some base utility class just so you can call your utility methods without having to type so much code.
Find a programmer who is experienced with the code base. Ask them to explain how something works.
If they say "this function calls that function", their code is procedural.
If they say "this class interacts with that class", their code is OO.
The following are the most prominent features of a bad design:
Rigidity
Fragility
Immobility
Take a look at the Dependency Inversion Principle.
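As a hedged Java sketch of what the Dependency Inversion Principle buys you (invented names): the high-level policy owns the abstraction, so swapping the low-level detail requires no change to the policy - attacking rigidity, fragility and immobility at once.

// The high-level module defines the abstraction it needs...
interface MessageSink {
    void write(String message);
}

class Notifier {                        // high-level policy
    private final MessageSink sink;
    Notifier(MessageSink sink) { this.sink = sink; }
    void taskFinished() { sink.write("done"); }
}

// ...and the low-level detail depends on that abstraction,
// not the other way around.
class ConsoleSink implements MessageSink {
    public void write(String message) { System.out.println(message); }
}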
When you don't just have a Money\Amount class but a TrainerPrice class, TablePrice class, AddTablePriceAction class and so on.
IDE Driven Development, or auto-complete development. Combined with extremely strict typing, it is a perfect storm.
This is where you see a lot of what could be variable values become class names and method names, as well as the gratuitous use of classes in general. You'll also see things like all primitives becoming objects, all literals becoming classes, function parameters becoming classes, and then conversion methods everywhere. Another sign is one class wrapping another to deliver a subset of its methods to a third class, including only the ones it needs at present.
This creates the possibility of generating a near-infinite amount of code, which is great if you have billable hours. When variables, contexts, properties and states get unrolled into hyper-explicit and overly specific classes, this creates an exponential cataclysm, as sooner or later those things multiply. Think of it like [a, b] x [x, y]. It can be further compounded by attempting to create a fully fluent interface and to adhere to as many design patterns as possible.
OOP languages are not as polymorphic as some loosely typed languages. Loosely typed languages often offer runtime polymorphism in shallow syntax that static analysis can't handle.
In OOP you might see forms of repetition that are hard to detect automatically and that could be turned into more dynamic code using maps. Although such languages are less dynamic, you can achieve dynamic features with some extra work.
The trade-off here is that you save thousands (or millions) of lines of code while potentially losing IDE features and static analysis. Performance can go either way: runtime polymorphism can often be converted to generated code, but in some cases the space is so huge that anything other than runtime polymorphism is impossible.
Problems are a lot more common with OOP languages lacking generics, and when OOP programmers try to strictly type a dynamic, loosely typed language.
What happens without generics is that where you should have A for X = [Q, W, E] and Y = [R, T, Y], you instead see [AQR, AQT, AQY, AWR, AWT, AWY, AER, AET, AEY]. This is often due to fear of using typeless code, or of passing the type as a variable and thereby losing IDE support.
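With generics, that explosion collapses into one parameterised class - a hedged Java sketch using the letters from above as placeholder types:

class Q {} class W {} class E {}
class R {} class T {} class Y {}

// One A<TX, TY> instead of nine hand-written AQR ... AEY variants.
class A<TX, TY> {
    private final TX x;
    private final TY y;
    A(TX x, TY y) { this.x = x; this.y = y; }
}

// Usage: new A<Q, R>(new Q(), new R()), new A<E, T>(new E(), new T()), ...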
Traditionally, loosely typed languages are written with a text editor rather than an IDE, and the advantage lost through IDE support is often regained in other ways, such as organising and structuring code so that it is navigable.
Often IDEs can be configured to understand your dynamic code (and link into it) but few properly support it in a convenient manner.
Hint: the context here is OOP gone horrifically wrong in PHP, where people with a traditional, simple Java OOP background have tried to apply it to PHP, which, even with some OOP support, is a fundamentally different type of language.
Designing against your platform to try to turn it into one you're used to, designing to cater to an IDE or other tools, designing to cater to supporting unit tests, etc. should all ring alarm bells, because it's a significant deviation away from designing working software that solves a given category of problems or delivers a given feature set.