Class conforming to too many protocols? [closed] - objective-c

So I've been building an app for a while now and over time one of my classes (a UIViewController) has grown quite large, doing quite a lot of stuff with one view. As such, I've had to make it conform to a lot of protocols (12 now). I'm now starting to worry that I'm conforming to a few too many.
Is there a problem with conforming to a lot of protocols in one class? Will there be a performance hit or anything like that? I'm not looking for personal opinions, but rather things like possible performance issues or if it's against a best-practices/style guide document (such as one distributed by Apple).
Some of the protocols my class conforms to (excluding a few custom protocols):
UIAlertViewDelegate, UIActionSheetDelegate, MFMailComposeViewControllerDelegate,
UITableViewDataSource, UITableViewDelegate, UITextFieldDelegate,
UIPopoverControllerDelegate, UIBarPositioningDelegate,
ABPeoplePickerNavigationControllerDelegate
Edit:
As noted by a few of the answers, the majority of the protocols are UI related - and the view itself is not cluttered. UIAlertViewDelegate, UIActionSheetDelegate, MFMailComposeViewControllerDelegate, and ABPeoplePickerNavigationControllerDelegate are all displayed separately (modally in the last two cases). UITableViewDataSource and UITableViewDelegate are simply to handle a table. UITextFieldDelegate is used to manage searching for the field (in order to allow me to check if the entered text is 'valid'). UIBarPositioningDelegate is required for iOS7 for appearance.
The class isn't overly large and only handles itself, without too much coupling (at least, in my opinion).

From the official Apple documentation:
Tip: If you find yourself adopting a large number of protocols in a class, it may be a sign that you need to refactor an overly-complex class by splitting the necessary behavior across multiple smaller classes, each with clearly-defined responsibilities. One relatively common pitfall for new OS X and iOS developers is to use a single application delegate class to contain the majority of an application’s functionality (managing underlying data structures, serving the data to multiple user interface elements, as well as responding to gestures and other user interaction). As complexity increases, the class becomes more difficult to maintain.
So the answer is that it is not bad to do so in terms of performance, but it is not a good practice.
https://developer.apple.com/library/ios/documentation/cocoa/conceptual/ProgrammingWithObjectiveC/WorkingwithProtocols/WorkingwithProtocols.html
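As a sketch of the refactoring that tip describes, applied to the question's protocol list (the class and property names here are hypothetical), the UITableViewDataSource role can move out of the view controller into a small dedicated object:

#import <UIKit/UIKit.h>

// A small object whose only responsibility is feeding the table.
@interface ContactListDataSource : NSObject <UITableViewDataSource>
@property (nonatomic, copy) NSArray *contacts;
@end

@implementation ContactListDataSource
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return (NSInteger)self.contacts.count;
}
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    // assumes a cell class was registered for the @"Cell" identifier
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell" forIndexPath:indexPath];
    cell.textLabel.text = self.contacts[indexPath.row];
    return cell;
}
@end

// In the view controller (which now drops UITableViewDataSource from its
// own protocol list). The controller must keep a strong reference, because
// a table view's dataSource property does not retain its data source:
//   self.contactsDataSource = [[ContactListDataSource alloc] init];
//   self.tableView.dataSource = self.contactsDataSource;

The same pattern applies to any of the other delegate protocols in the list that carry real logic; each can move to its own collaborator.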

There is no penalty in having a class conform to very many protocols -- apart from negligibly higher compilation time/space (due to having bigger compilation tables for that class, but this is really not meaningful, IMO) and possibly a class readability issue.
Indeed, when you specify that a class conforms to a protocol you are simply telling the compiler to add all the methods declared inside that protocol to the class interface. This has practically no impact on either performance or memory occupation.
The main issue you have is, IMO, class readability. You have the same issue, in general, when your class offers too many public methods: it becomes difficult to understand what the class is for. Refactoring would be strongly suggested, but again, this has nothing to do with performance.
From what you pasted, it seems that the protocols you are adding to your class relate to UI elements that you are managing inside of that class. Then, either the view your class manages is overly complex, or you simply need those delegates.

Best practice for programming in general is to separate concerns and functions as much as possible. To me, it looks like your app consists of one crowded controller. Why is that? Too much code in one class or file makes it harder to debug and maintain, plus your interface must be really cluttered if you have all those elements in it. I can't give specific advice without seeing more code, but in general, you have too many things in one controller.

I don't think this causes any performance penalty or error.
But it seems that your class is very coupled, which is a bad design practice -- though that's not necessarily true; how many lines of code does your class have?
If that's a lot and you feel that your class is doing some work that it shouldn't, you can refactor your code.

I cannot tell you about performance without actually seeing code, but it's possible that if two or more protocols declare identically named methods (it has happened to me) then you will get unexpected behavior, which could include degraded performance. Suppose that two protocols define a basic method such as - (void)update and now you are only implementing one update method, but both protocols want to use it, so at best you are getting too many calls to update (degraded performance). If one of the protocols defines the method as @optional then it's not always immediately obvious something bad is happening.
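For instance, a minimal sketch of that collision (both protocols here are hypothetical):

#import <Foundation/Foundation.h>

@protocol DataRefreshing <NSObject>
- (void)update;   // this sender expects -update to reload model data
@end

@protocol ClockObserving <NSObject>
@optional
- (void)update;   // this sender expects -update to redraw a clock face
@end

// One class, one -update implementation, two different contracts:
// both senders will call the same method, so it runs more often than
// either contract alone implies, and must satisfy both at once.
@interface Dashboard : NSObject <DataRefreshing, ClockObserving>
@end

@implementation Dashboard
- (void)update {
    // must reload data AND redraw the clock, whoever is calling
}
@end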
What I can tell you is that this kind of monolithic design is universally considered poor. It sounds like you have a big ball of mud going on.

Related

"Huge class files are bad" — is it really and what is the best solution?

In a code review, I heard it is bad to create huge classes with a lot of lines of code. Apparently the 1000 lines of code I had were terrible practice in terms of readability/navigability; I do hear some sense in that.
So I have some complex classes which are basically the code logic behind different screens. I'm programming on Android so this is for example for a Fragment or Activity (though this is a generic question).
Now I can choose to group methods and put them in utility classes. This will at the very least shorten my code in the sense of lines per class and give some sense of which method is where. The methods, though, are really only used by no more than one class, so should this really be a utility class? My gut feeling says utility classes should be stateless classes that contain static methods usable by many classes.
Now I could also go for collapsible code blocks and group my methods in them. This will provide readability and usability for me, but not for other programmers.
Then if I look at for instance the Android Activity class, it contains over 6000 lines of code. Is this considered bad practice as well?
I realize this question might be too much "opinion based" but I hope there is a clear and common answer to it.
Lines of code is a somewhat meaningless measure in my opinion. Certain types of classes are going to be naturally longer - for example MVC controllers.
The most important principle to keep in mind when designing a class is the single responsibility principle, which states:
every class should have a single responsibility, and that responsibility should be entirely encapsulated by the class
The Activity type in Android could well follow this principle and still be 6000 lines long if, for example, an activity is a very complex thing which requires lots of horrible nested control and flow statements.
Without seeing the class it's difficult to say, however, in practice, it's unlikely that a well designed single class would grow to this size.

Why encapsulation is an important feature of OOP languages? [closed]

I came across different interviews where I was asked why encapsulation is used. Whose requirement actually is encapsulation? Is it for users of the program? Or is it for co-workers? Or is it to protect code from hackers?
Encapsulation helps in isolating implementation details from the behavior exposed to clients of a class (other classes/functions that are using this class), and gives you more control over coupling in your code. Consider this example, similar to the one in Robert Martin's book Clean Code:
public class Car
{
//...
public float GetFuelPercentage() { /* ... */ }
//...
private float gasoline;
//...
}
Note that the client using the function which gives you the amount of fuel in the car doesn't care what type of fuel the car uses. This abstraction separates the concern (amount of fuel) from the unimportant (in this context) detail: whether it is gas, oil or anything else.
The second thing is that the author of the class is free to do anything they want with the internals of the class, for example changing gasoline to oil, and other things, as long as they don't change its behaviour. This is thanks to the fact that they can be sure that no one depends on these details, because they are private. The fewer dependencies there are in the code, the more flexible and easier to maintain it is.
One other thing, correctly noted in the underrated answer by utnapistim: low coupling also helps in testing the code, and maintaining those tests. The less complicated a class's interface is, the easier it is to test. Without encapsulation, with everything exposed, it would be hard to comprehend what to test and how.
To reiterate some discussions in the comments:
No, encapsulation is not the most important thing in OOP. I'd dare even to say that it's not very important. Important things are those encouraged by encapsulation - like loose coupling. But it is not essential - a careful developer can maintain loose coupling without encapsulating variables, etc. As pointed out by vlastachu, Python is a good example of a language which does not have mechanisms to enforce encapsulation, yet it is still feasible for OOP.
No, hiding your fields behind accessors is not encapsulation. If the only thing you've done is write "private" in front of variables and then mindlessly provide a get/set pair for each of them, then in fact they are not encapsulated. Someone in a distant place in the code can still meddle with the internals of your class, and can still depend on them (well, it is of course a bit better that they depend on a method, not on a field).
No, encapsulation's primary goal is not to avoid mistakes. Primary goals are at least similar to those listed above, and thinking that encapsulation will defend you from making mistakes is naive. There are just lots of other ways to make a mistake besides altering a private variable. And altering a private variable is not so hard to find and fix. Again - Python is a good example for the sake of this argument, as it can have encapsulation without enforcing it.
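To make the accessors point concrete, here is a sketch in Objective-C (a hypothetical Account class): the first interface merely relabels the field, while the second hides the state behind behavior:

#import <Foundation/Foundation.h>

// Not really encapsulated: a mindless get/set pair. Distant code can
// still read and write the raw state and depend on its representation.
@interface LeakyAccount : NSObject
@property (nonatomic) double balance;
@end

@implementation LeakyAccount
@end

// Encapsulated: only behavior is exposed. The stored representation
// can change without touching any caller.
@interface Account : NSObject
- (void)deposit:(double)amount;
- (BOOL)withdraw:(double)amount;   // returns NO when funds are insufficient
@end

@implementation Account {
    double _balance;               // private instance variable
}
- (void)deposit:(double)amount {
    _balance += amount;
}
- (BOOL)withdraw:(double)amount {
    if (amount > _balance) return NO;
    _balance -= amount;
    return YES;
}
@end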
Encapsulation prevents people who work on your code from making mistakes, by making sure that they only access things they're supposed to access.
At least in most OO languages, encapsulation is roughly equivalent to the lock on the door of a bathroom.
It's not intended to keep anybody out if they really insist on entering.
It is intended as a courtesy to let people know that entering will lead mostly to:
embarrassment, and
a stinking mess.
Encapsulation allows you to formalize your interfaces, separating levels of abstraction (i.e. "application logic accesses IO code only in this and this way").
This in turn, allows you to change the implementation of a module (the data and algorithms inside the module) without changing the interface (and affecting client code).
This ability to modify modules independently of each other, improves your ability to measure your performance and make predictions in a project's deadlines.
It also allows you to test modules separately and reuse them in other projects (because encapsulation also lowers inter-dependencies and improves modularity of your code).
Not enforcing encapsulation tends to lead to a project's failure (the problem grows with the complexity of the project).
I architected a fast-track project. Encapsulation reduces the propagation of change through the system. Changes cost time and money.
When code is not encapsulated, a person has to search many files to find where to make the change(s). Worse, there are the questions "Did I find all the places?" and "What effect do all of these scattered changes have on the entire system?"
I'm working on an embedded medical device and quality is imperative. Also, all changes must be documented, reviewed, unit tested and finally a system test performed. By using encapsulation, we can reduce the number of changes and their locality, reducing the number of files that must be retested.

Why subclassing [duplicate]

After reading lots of blogs, forum entries and several Apple docs, I still don't know whether extensive subclassing in Objective-C is a wise thing to do or not.
Take for example the following case:
Say I'm developing a puzzle game which has a lot of elements. All of those elements share a certain amount of the same behaviour. Then, within my collection of elements, different groups of elements share equal behaviour, distinguishing groups from groups, etc...
So, after determining what inherits from what, I decided to subclass out of oblivion. And why shouldn't I? Considering the ease tweaking general behaviour takes with this model, I think I accomplished something OOP is meant for.
But - and this is the source of my question - Apple mentions using delegates, data source methods, and informal protocols in favour of subclassing. It really boggles my mind as to why.
There seem to be two camps: those in favor of subclassing, and those in favor of not. It depends on personal taste, apparently. I'm wondering what the pros and cons are of subclassing massively and not subclassing massively?
To wrap it up, my question is simple: Am I right? And why or why not?
Delegation is a means of using the composition technique to replace some aspects of coding you would otherwise subclass for. As such, it boils down to the age old question of the task at hand needing one large thing that knows how to do a lot, or if you have a loose network of specialized objects (a very UNIX sort of model of responsibility).
Using a combination of delegates and protocols (to define what the delegates are supposed to be able to do) provides a great deal of flexibility of behavior and ease of coding - going back to that Liskov substitution principle, when you subclass you have to be careful you don't do anything a user of the whole class would find unexpected. But if you are simply making a delegate object then you have much less to be responsible for, only that the delegate methods you implement do what that one protocol calls for, beyond that you don't care.
There are still many good reasons to use subclasses; if you truly have shared behavior and variables between a number of classes, it may make a lot of sense to subclass. But if you can take advantage of the delegate concept you'll often make your classes easier to extend or use in ways that you, the designer, may not have expected.
I tend to be more of a fan of formal protocols than informal ones, because not only do formal protocols make sure you have the methods a class treating you as a delegate expect, but also because the protocol definition is a natural place to document what you expect from a delegate that implements those methods.
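A sketch of that style in Objective-C (the names are hypothetical): the formal protocol both enforces the required methods and is the natural home for their documentation:

#import <Foundation/Foundation.h>

@protocol PuzzleBoardDelegate <NSObject>
// Required: called after a piece has been placed and board state updated.
- (void)boardDidPlacePiece:(id)piece;
@optional
// Optional: return NO to veto a placement; treated as YES if unimplemented.
- (BOOL)boardShouldAllowPlacingPiece:(id)piece;
@end

@interface PuzzleBoard : NSObject
// weak, to avoid a retain cycle between the board and its delegate
@property (nonatomic, weak) id<PuzzleBoardDelegate> delegate;
- (void)placePiece:(id)piece;
@end

@implementation PuzzleBoard
- (void)placePiece:(id)piece {
    // Optional methods must be checked before calling:
    if ([self.delegate respondsToSelector:@selector(boardShouldAllowPlacingPiece:)] &&
        ![self.delegate boardShouldAllowPlacingPiece:piece]) {
        return;
    }
    // ... mutate board state ...
    [self.delegate boardDidPlacePiece:piece];   // required, safe to call directly
}
@end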
Personally, I follow this rule: I can create a subclass if it respects the Liskov substitution principle.
Subclassing has its benefits, but it also has some drawbacks. As a general rule, I try to avoid implementation inheritance and instead use interface inheritance and delegation.
One of the reasons I do this is because when you inherit implementation, you can wind up with problems if you override methods but don't adhere to their (sometimes undocumented) contract. Additionally, I find walking class hierarchies with implementation inheritance difficult because methods can be overridden or implemented at any level. Finally, when subclassing you can only widen an interface, you can't narrow it. This leads to leaky abstractions. A good example of this is java.util.Stack, which extends java.util.Vector. I shouldn't be able to treat a stack as a Vector; doing so only allows the consumer to do an end run around the interface.
Others have mentioned the Liskov Substitution Principle. I think that using that would certainly have cleared up the java.util.Stack problem, but it can also lead to very deep class hierarchies in order to ensure that classes get only the methods they are supposed to have.
Instead, with interface inheritance there is essentially no class hierarchy because interfaces rarely need to extend one another. The classes simply implement the interfaces that they need to and can therefore be treated in the correct way by the consumer. Additionally, because there is no implementation inheritance, consumers of these classes won't infer their behavior due to previous experience with a parent class.
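The same point translated into Objective-C (a sketch): instead of subclassing NSMutableArray the way java.util.Stack subclasses Vector, compose with it and expose only the narrow stack interface. (Subclassing NSMutableArray is awkward anyway, since it is a class cluster.)

#import <Foundation/Foundation.h>

@interface Stack : NSObject
- (void)push:(id)object;
- (id)pop;   // returns nil when the stack is empty
@end

@implementation Stack {
    NSMutableArray *_storage;   // hidden implementation detail
}
- (instancetype)init {
    if ((self = [super init])) {
        _storage = [NSMutableArray array];
    }
    return self;
}
- (void)push:(id)object {
    [_storage addObject:object];
}
- (id)pop {
    id top = _storage.lastObject;      // nil when empty
    if (top) [_storage removeLastObject];
    return top;
}
@end

// Callers cannot insert into the middle of this Stack, because the
// backing array never appears in its interface.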
In the end though, it doesn't really matter which way you go. Both are perfectly acceptable. It's really more a matter of what you're more comfortable with and what the frameworks that you're working with encourage. As the old saying goes: "When in Rome do as Romans do."
There's nothing wrong with using inheritance in Objective-C. Apple uses it quite a bit. For instance, in Cocoa Touch, the inheritance tree of UIButton is UIControl : UIView : UIResponder : NSObject.
I think Martin hit on an important point in mentioning the Liskov substitution principle. Also, proper use of inheritance requires that the implementer of the subclass has a deep knowledge of the super class. If you've ever struggled to extend a non-trivial class in a complex framework, you know that there's always a learning curve. In addition, implementation details of the super class often "leak through" to the subclass, which is a big pain in the #$& for framework builders.
Apple chose to use delegation in many instances to address these problems; non-trivial classes like UIApplication expose common extension points through a delegate object so most developers have both an easier learning curve and a more loosely coupled way to add application specific behavior -- extending UIApplication directly is rarely necessary.
In your case, for your application-specific code, use whichever techniques you're comfortable with and work best for your design. Inheritance is a great tool when used appropriately.
I frequently see application programmers draw lessons from framework designs and try to apply them to their application code (this is common in the Java, C++ and Python worlds as well as Objective-C). While it's good to think about and understand the choices framework designers made, those lessons don't always apply to application code.
In general you should avoid subclassing API classes if there exist delegates, etc. that accomplish what you want to do. In your own code subclassing is often nicer, but it really does depend on your goals; e.g. if you're providing an API you should provide a delegate-based API rather than assuming subclassing.
When dealing with APIs, subclassing has more potential bugs -- e.g. if any class in the class hierarchy gets a new method that has the same name as your addition, you may break stuff. And also, if you're providing a useful/helper-type function, there's a chance that in the future something similar will be added to the actual class you were subclassing, and that might be more efficient, etc., but your override will hide it.
Please read the Apple documentation "Adding Behavior to a Cocoa Program". Under the "Inheriting from a Cocoa class" section, see the 2nd paragraph. Apple clearly mentions that subclassing is the primary way of adding application-specific behavior to the framework (please note, FRAMEWORK).
The MVC pattern does not completely disallow the use of subclasses or subtypes. At least I have not seen this recommendation from either Apple or others (if I have missed it, please feel free to point me to the right source of information about this). If you are subclassing API classes only within your application, please go ahead, no one's stopping you, but do take care that it does not break the behavior of the class/API as a whole. Subclassing is a great way of extending the framework APIs' functionality. We see a lot of subclassing within Apple's iOS framework APIs too.
As a developer one has to take care that the implementation is well documented and not duplicated accidentally by another developer. It's another ball game altogether if your application is a set of API classes that you plan to distribute as a reusable component.
IMHO, rather than asking around what the best practice is, first read the related documentation thoroughly, implement and test it. Make your own judgement. You know best about what you're up to.
It's easy for others (like me and so many others) to just read stuff from different sources on the Net and throw around terms. Be your own judge, it has worked for me so far.
I really think it depends on what you're trying to do. If the puzzle game you describe in the example really does have a set of unique elements that share common attributes, and there are no provided classes - say, for example, "NSPuzzlePiece" - that fit your needs, then I don't see a problem with subclassing extensively.
In my experience, delegates, data source methods, and informal protocols are much more useful when Apple has provided a class that already does something close to what you want it to do.
For example, say you're building an app that uses a table. There is (and I speak here of the iPhone SDK, since that's where I have experience) a class UITableView that does all the little niceties of creating a table for interaction with the user, and it's much more efficient to define a data source for an instance of UITableView than it is to completely subclass UITableView and redefine or extend its methods to customize its behavior.
Similar concepts go for delegates and protocols. If you can fit your ideas into Apple's classes, then it's usually easier (and will work more smoothly) to do so and use data source, delegates, and protocols than it is to create your own subclasses. It helps you avoid extra work and wasting time, and is usually less error-prone. Apple's classes have taken care of the business of making functions efficient and debugging; the more you can work with them, the fewer mistakes your program will have in the long run.
My impression of ADC's emphasis 'against' subclassing has more to do with the legacy of how the operating system has evolved... back in the day (Mac Classic, aka OS 9) when C++ was the primary interface to most of the Mac toolbox, subclassing was the de facto standard for a programmer to modify the behaviour of commonplace OS features (and this was indeed sometimes a pain in the neck and meant that one had to be very careful that any and all modifications behaved predictably and didn't break any standard behaviour).
That being said, MY IMPRESSION of ADC's emphasis against subclassing is not putting forth a case for designing an application's class hierarchy without inheritance, BUT INSTEAD pointing out that in the new way of doing things (i.e. OS X) there are in most cases more appropriate means of customizing standard behavior without needing to subclass.
So, by all means, design your puzzle program's architecture as robustly as you can, leveraging inheritance as you see fit!
looking forward to seeing your cool new puzzle application!
|K<
Apple indeed appears to passively discourage subclassing with Objective-C.
It is an axiom of OOP design to favor composition over inheritance.


What are the tell-tale signs of bad object oriented design? [closed]

When designing a new system or getting your head around someone else's code, what are some tell-tale signs that something has gone wrong in the design phase? Are there clues to look for on class diagrams and inheritance hierarchies, or even in the code itself, that just scream for a design overhaul, particularly early in a project?
The things that mostly stick out for me are "code smells".
Mostly I'm sensitive to things that go against "good practice".
Things like:
Methods that do things other than what you'd think from the name (eg: FileExists() that silently deletes zero byte files)
A few extremely long methods (sign of an object wrapper around a procedure)
Repeated use of switch/case statements on the same enumerated member (sign of sub-classes needing extraction)
Lots of member variables that are used for processing, not to capture state (might indicate need to extract a method object)
A class that has lots of responsibilities (violation of the Single Responsibility Principle)
Long chains of member access (this.that is fine, this.that.theOther is fine, but my.very.long.chain.of.member.accesses.for.a.result is brittle)
Poor naming of classes
Use of too many design patterns in a small space
Working too hard (rewriting functions already present in the framework, or elsewhere in the same project)
Poor spelling (anywhere) and grammar (in comments), or comments that are simply misleading
I'd say the number one rule of poor OO design (and yes I've been guilty of it too many times!) is:
Classes that break the Single Responsibility Principle (SRP) and perform too many actions
Followed by:
Too much inheritance instead of composition, i.e. classes that derive from a sub-type purely so they get functionality for free. Favour Composition over Inheritance.
Impossible to unit test properly.
Anti-patterns
Software design anti-patterns
Abstraction inversion: Not exposing implemented functionality required by users, so that they re-implement it using higher level functions
Ambiguous viewpoint: Presenting a model (usually OOAD) without specifying its viewpoint
Big ball of mud: A system with no recognizable structure
Blob: Generalization of God object from object-oriented design
Gas factory: An unnecessarily complex design
Input kludge: Failing to specify and implement handling of possibly invalid input
Interface bloat: Making an interface so powerful that it is extremely difficult to implement
Magic pushbutton: Coding implementation logic directly within interface code, without using abstraction.
Race hazard: Failing to see the consequence of different orders of events
Railroaded solution: A proposed solution that while poor, is the only one available due to poor foresight and inflexibility in other areas of the design
Re-coupling: Introducing unnecessary object dependency
Stovepipe system: A barely maintainable assemblage of ill-related components
Staralised schema: A database schema containing dual purpose tables for normalised and datamart use
Object-oriented design anti-patterns
Anemic Domain Model: The use of a domain model without any business logic; this is not OOP, because each object should have both attributes and behaviors
BaseBean: Inheriting functionality from a utility class rather than delegating to it
Call super: Requiring subclasses to call a superclass's overridden method (see the sketch after this list)
Circle-ellipse problem: Subtyping variable-types on the basis of value-subtypes
Empty subclass failure: Creating a class that fails the "Empty Subclass Test" by behaving differently from a class derived from it without modifications
God object: Concentrating too many functions in a single part of the design (class)
Object cesspool: Reusing objects whose state does not conform to the (possibly implicit) contract for re-use
Object orgy: Failing to properly encapsulate objects permitting unrestricted access to their internals
Poltergeists: Objects whose sole purpose is to pass information to another object
Sequential coupling: A class that requires its methods to be called in a particular order
Singletonitis: The overuse of the singleton pattern
Yet Another Useless Layer: Adding unnecessary layers to a program, library or framework. This became popular after the first book on programming patterns.
Yo-yo problem: A structure (e.g., of inheritance) that is hard to understand due to excessive fragmentation
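As a concrete illustration of one entry above, here is the "call super" anti-pattern and its usual fix (a template method), sketched in Objective-C with hypothetical names:

#import <Foundation/Foundation.h>

// The "call super" anti-pattern: -validate only works if every subclass
// override remembers to call [super validate], which is easy to forget.
// The fix below is a template method: the base class owns the sequence
// and offers subclasses a dedicated hook, so there is nothing to forget.
@interface Validator : NSObject
- (void)validate;        // not meant to be overridden
- (void)performChecks;   // subclasses override this hook instead
@end

@implementation Validator
- (void)validate {
    // invariant setup the base class always requires
    [self performChecks];   // subclass-specific work happens here
    // invariant teardown
}
- (void)performChecks {
    // default: no extra checks
}
@end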
This question makes the assumption that object-oriented means good design. There are cases where another approach is much more appropriate.
One smell is objects having hard dependencies/references to other objects that aren't a part of their natural object hierarchy or domain related composition.
Example: Say you have a city simulation. If a Person object has a NearestPostOffice property, you are probably in trouble.
One thing I hate to see is a base class down-casting itself to a derived class. When you see this, you know you have problems.
Other examples might be:
Excessive use of switch statements
Derived classes that override everything
In my view, all OOP code degenerates to procedural code over a sufficiently long time span.
Granted, if you read my most recent question, you might understand why I am a little jaded.
The key problem with OOP is that it doesn't make it obvious that your object construction graph should be independent of your call graph.
Once you fix that problem, OOP actually starts to make sense. The problem is that very few teams are aware of this design pattern.
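The pattern being alluded to is usually called dependency injection: objects receive their collaborators rather than constructing them, so all construction can be gathered in one place (a composition root) apart from the call graph. A minimal sketch with hypothetical classes:

#import <Foundation/Foundation.h>

@interface Engine : NSObject
@end
@implementation Engine
@end

@interface Car : NSObject
// The collaborator is handed in; Car never constructs its own Engine.
- (instancetype)initWithEngine:(Engine *)engine;
@end

@implementation Car {
    Engine *_engine;
}
- (instancetype)initWithEngine:(Engine *)engine {
    if ((self = [super init])) {
        _engine = engine;
    }
    return self;
}
@end

int main(void) {
    @autoreleasepool {
        // Composition root: the object graph is wired up here, in one
        // place, independently of the call graph that later uses it.
        Engine *engine = [[Engine alloc] init];
        Car *car = [[Car alloc] initWithEngine:engine];
        (void)car;
    }
    return 0;
}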
Here's a few:
Circular dependencies
You wish property XYZ of a base class wasn't protected/private
You wish your language supported multiple inheritance
Within a long method, sections surrounded with #region / #endregion - in almost every case I've seen, that code could easily be extracted into a new method OR needed to be refactored in some way.
Overly-complicated inheritance trees, where the sub-classes do very different things and are only tangentially related to one another.
Violation of DRY - sub-classes that each override a base method in almost exactly the same way, with only a minor variation. An example: I recently worked on some code where the subclasses each overrode a base method and where the only difference was a type test ("x is ThisType" vs "x is ThatType"). I implemented a method in the base that took a generic type T, that it then used in the test. Each child could then call the base implementation, passing the type it wanted to test against. This trimmed about 30 lines of code from each of 8 different child classes.
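That refactoring relied on generics; an Objective-C analog (hypothetical names) passes the type to test as a Class parameter, so each child class shrinks to a one-liner:

#import <Foundation/Foundation.h>

@interface Handler : NSObject
- (BOOL)canHandle:(id)item;                        // overridden per subclass
- (BOOL)canHandle:(id)item kindOfClass:(Class)cls; // shared base implementation
@end

@implementation Handler
- (BOOL)canHandle:(id)item {
    return NO;   // default: handle nothing
}
- (BOOL)canHandle:(id)item kindOfClass:(Class)cls {
    // the formerly duplicated type test lives here exactly once
    return [item isKindOfClass:cls];
}
@end

@interface StringHandler : Handler
@end

@implementation StringHandler
- (BOOL)canHandle:(id)item {
    // each subclass now just names the type it cares about
    return [self canHandle:item kindOfClass:[NSString class]];
}
@end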
Duplicate code = Code that does the same thing...I think in my experience this is the biggest mistake that can occur in OO design.
Objects are good; creating a gazillion of them is bad OO design.
Having all your objects inherit from some base utility class just so you can call your utility methods without having to type so much code.
Find a programmer who is experienced with the code base. Ask them to explain how something works.
If they say "this function calls that function", their code is procedural.
If they say "this class interacts with that class", their code is OO.
The following are the most prominent features of a bad design:
Rigidity
Fragility
Immobility
Take a look at The Dependency Inversion Principle
When you don't just have a Money/Amount class but a TrainerPrice class, TablePrice class, AddTablePriceAction class and so on.
IDE-driven development, or auto-complete development. Combined with extremely strict typing, it is a perfect storm.
This is where you see a lot of what could be variable values become class names and method names, as well as the gratuitous use of classes in general. You'll also see things like all primitives becoming objects, all literals becoming classes, function parameters becoming classes, and then conversion methods everywhere. You'll also see things like a class wrapping another, delivering a subset of its methods to another class, inclusive of only the ones it needs at present.
This creates the possibility of generating a near-infinite amount of code, which is great if you have billable hours. When variables, contexts, properties and states get unrolled into hyper-explicit and overly specific classes, this creates an exponential cataclysm, as sooner or later those things multiply. Think of it like [a, b] x [x, y]. This can be further compounded by an attempt to create a fully fluent interface as well as adhere to as many design patterns as possible.
OOP languages are not as polymorphic as some loosely typed languages. Loosely typed languages often offer runtime polymorphism in shallow syntax that static analysis can't handle.
In OOP you might see forms of repetition that are hard to automatically detect and that could be turned into more dynamic code using maps. Although such languages are less dynamic, you can achieve dynamic features with some extra work.
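As a sketch of that map idea in Objective-C (the command names are hypothetical), a dictionary of blocks can replace a run of near-identical branches:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Instead of a switch with one near-identical case per command,
        // the variation lives in data: a map from name to behavior.
        // (Blocks are copied so the heap owns them, not the stack.)
        NSDictionary *commands = @{
            @"start" : [^{ NSLog(@"starting"); } copy],
            @"stop"  : [^{ NSLog(@"stopping"); } copy],
            @"pause" : [^{ NSLog(@"pausing"); } copy],
        };

        void (^handler)(void) = commands[@"start"];
        if (handler) handler();   // dispatch chosen at runtime by key
    }
    return 0;
}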
The trade-off here is that you save thousands (or millions) of lines of code while potentially losing IDE features and static analysis. Performance can go either way. Runtime polymorphism can often be converted to generated code. However, in some cases the space is so huge that anything other than runtime polymorphism is impossible.
Problems are a lot more common with OOP languages lacking generics, and when OOP programmers try to strictly type a dynamic, loosely typed language.
What happens without generics is that where you should have A for X = [Q, W, E] and Y = [R, T, Y], you instead see [AQR, AQT, AQY, AWR, AWT, AWY, AER, AET, AEY]. This is often due to fear of using typeless parameters, or of passing the type as a variable, for fear of losing IDE support.
Traditionally loosely typed languages are made with a text editor rather than an IDE and the advantage lost through IDE support is often gained in other ways such as organising and structuring code such that it is navigable.
Often IDEs can be configured to understand your dynamic code (and link into it) but few properly support it in a convenient manner.
Hint: The context here is OOP gone horrifically wrong in PHP where people using simple OOP Java programming traditionally have tried to apply that to PHP which even with some OOP support is a fundamentally different type of language.
Designing against your platform to try to turn it into one you're used to, designing to cater to an IDE or other tools, designing to cater to supporting unit tests, etc. should all ring alarm bells, because it's a significant deviation away from designing working software to solve a given category of problems or a given feature set.