Too much C-Style in Objective-C programs? - objective-c

Hi, I'm writing this question because I'm a newbie in ObjC and a lot of doubts came to my mind when trying to make my first training app. The thing is that I have a strong background in C, I've been programming in Java for the last year, and I've done some college work with Smalltalk (I mention this because those are my programming references and those are the languages I'm comparing ObjC with).
The first problem I've encountered is that I don't know where to draw the line between ObjC and C. For example, when dealing with math operations, should I use math.h, or is there a more "object" way, like you can do in Smalltalk (aNumber raisedTo: 3)? How does a person with no background at all in C learn ObjC?
Another thing that I couldn't find was a collections protocol (I've looked over the Foundation framework documentation given by Apple). I want to implement an expression tree class, and I want to know whether there are methods that all collections should implement (like in Smalltalk or Java) or whether I have to check every collection by hand and see if there is a cool method that my new collection should have.
I don't know if I'm being too stupid or searching for features that the language/framework doesn't have. I want to program in ObjC with the ObjC style, not thinking in C, Java or Smalltalk.
Sorry if the question was too long.

Absolutely use <math.h>. You don't want to pay message-sending overhead for functions that run in 30 cycles. Even function call overhead seems pretty steep at that point.
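As a rough illustration (the class and method names here are invented), this is the sort of thing that stays plain C inside an otherwise ordinary Objective-C class:

#import <Foundation/Foundation.h>
#include <math.h>

@interface Projectile : NSObject
- (double)rangeForSpeed:(double)speed angle:(double)radians;
@end

@implementation Projectile
// Plain C arithmetic and math.h calls; no NSNumber boxing anywhere.
- (double)rangeForSpeed:(double)speed angle:(double)radians {
    return speed * speed * sin(2.0 * radians) / 9.81;
}
@end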
More generally, use as much or as little of C style as you want to. I've seen Objective-C that was nothing but a couple of C modules glued together with Objective-C messages, and I've seen Objective-C with essentially zero lines of code that lacked square brackets. I've seen beautiful, effective code written both ways. Good code is good code, however you write it.

In general, you'll use C features for numerical calculations and objects for most other things. The reason is that objects are way heavier than a simple scalar, and boxing numbers buys you nothing. Why would you ever write [[NSNumber numberWithInteger:1] numberByAddingNumber:[NSNumber numberWithInteger:2]] when you can just write 1+2? It's not only painful to read, it's far slower and it doesn't gain you anything.
On the other hand, Cocoa has rich object libraries for strings, arrays, networking and many other areas, and using those is a big win.
Knowing what's there — and thus what the easiest way to do something is — is just a matter of learning. If you think something should be there and you can't find it, you can ask either here or on Apple's Cocoa-Dev mailing list.
As for a collection protocol — there really isn't one. The closest thing to it is the NSFastEnumeration protocol, which defines precisely one method: countByEnumeratingWithState:objects:count:. This lets you use the for (id someObject in someCollection) syntax to enumerate the objects in a collection. Otherwise, all the collections define their own independent interfaces.
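For what it's worth, here's a small sketch of what NSFastEnumeration buys you: the same for...in syntax works across otherwise unrelated collection classes:

NSArray *words = [NSArray arrayWithObjects:@"one", @"two", @"three", nil];
NSSet *letters = [NSSet setWithObjects:@"a", @"b", @"c", nil];

// Any class adopting NSFastEnumeration can be walked with for...in.
for (NSString *word in words) {
    NSLog(@"%@", word);
}
for (NSString *letter in letters) {
    NSLog(@"%@", letter);
}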

The first problem I've encountered is that I don't know where to draw a line between ObjC and C.
My rule is to use C wherever it makes sense to you. Objective-C has the benefit of letting you choose when to be procedural and when to be object-oriented. Go with what fits best with the code you're writing.
Another thing that I couldn't find was a collections protocol [...] I want to implement an expression tree class and I want to know if there are methods that all collections should implement (like in Java) or whether I have to check every collection by hand and see if there is a method that my collection should have.
Unlike Java, Objective-C does not have a master protocol for collections like the java.util.Collection interface. Also, there isn't a proliferation of specific container implementations as there is in Java. However, that gives you the freedom to implement a collection in a way that makes sense for your code.
For building a tree-like structure, you might take a look at NSTreeNode to see if it might be useful to leverage. (It may be more than you need or want, but it might be worth a shot.)
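If NSTreeNode does turn out to fit, a minimal sketch looks something like this (note that NSTreeNode lives in AppKit, so it's Mac-only; the string contents are just placeholders):

#import <Cocoa/Cocoa.h>

// A tiny expression tree for (3 + 4).
NSTreeNode *plus  = [NSTreeNode treeNodeWithRepresentedObject:@"+"];
NSTreeNode *three = [NSTreeNode treeNodeWithRepresentedObject:@"3"];
NSTreeNode *four  = [NSTreeNode treeNodeWithRepresentedObject:@"4"];

[[plus mutableChildNodes] addObject:three];
[[plus mutableChildNodes] addObject:four];

NSLog(@"root %@ has %lu children",
      [plus representedObject], (unsigned long)[[plus childNodes] count]);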
As far as rolling your own collection, I've learned a lot while creating CHDataStructures.framework, and you're welcome to use whatever you like from that code, or just look at my attempts at creating Cocoa-like structures, designed to complement the Foundation collections and operate similarly. Good luck!

Try to use each language for what it's good at. IMHO, that means Obj-C objects for the larger structure, but C-like code implementing the methods. So use math.h and concise C code to implement logic, but don't be shy about using Obj-C classes to organize your larger blocks of functionality into something that makes sense.
Also, try to interact with the frameworks using their style so you're not swimming upstream.

As has been mentioned, there’s no real protocol for abstract collection classes (aside from the NSFastEnumeration protocol which provides the for(id item in collection) syntax when implemented), but there are conventions to follow.
Apple’s Introduction to Coding Guidelines for Cocoa covers some of this, and there is in fact a section on naming collection methods which covers the general cases (though note that generic container classes such as NSArray use the term “Object” as opposed to “Element” listed in the examples there – i.e. addObject:, removeObject:, and so on).
Following the patterns listed here (among others) is actually crucial when you want your classes to be KVC-compliant, which allows other users to observe changes in your object’s properties.
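As an illustration (the class and key names are made up), a to-many property called "operands" on an expression-node class becomes KVC-compliant by following those naming patterns:

@interface ExpressionNode : NSObject {
    NSMutableArray *operands; // backing store for the to-many key "operands"
}
@end

@implementation ExpressionNode
- (id)init {
    if ((self = [super init])) {
        operands = [[NSMutableArray alloc] init];
    }
    return self;
}

// KVC-compliant indexed accessors for the to-many key "operands".
- (NSUInteger)countOfOperands {
    return [operands count];
}
- (id)objectInOperandsAtIndex:(NSUInteger)idx {
    return [operands objectAtIndex:idx];
}
- (void)insertObject:(id)operand inOperandsAtIndex:(NSUInteger)idx {
    [operands insertObject:operand atIndex:idx];
}
- (void)removeObjectFromOperandsAtIndex:(NSUInteger)idx {
    [operands removeObjectAtIndex:idx];
}

- (void)dealloc {
    [operands release];
    [super dealloc];
}
@end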

Related

Making Objective-C Classes look Beautiful

I wanted to ask you all for your opinions on code smells in Objective-C, specifically Cocoa Touch. I'm working on a fairly complex game, and about to start the Great December Refactoring.
A good number of my classes, the models in particular, are full of methods that deal with internal business logic; I'll be hiding these in a private category, in my war against massive header files. Those private categories contain a large number of declarations, and this makes me feel uneasy... almost like Objective-C's out to make me feel guilty about all of these methods.
The more I refactor (a good thing!), the more I have to maintain all this duplication (not so good). It just feels wrong.
In a language like Ruby, the community puts a LOT of emphasis on very short, clear, beautiful methods. My question is, for Objective-C (Cocoa Touch specifically), how long are your methods, how big are your controllers, and how many methods per class do you find become typical in your projects? Are there any particularly nice, beautiful examples of classes made up of short methods in Objective-C, or is that simply not an important part of the language's culture?
DISCLOSURE: I'm currently reading "The Little Schemer", which should explain my sadness, re: Objective C.
Beauty is subjective. For me, an Objective-C class is beautiful if it is readable (I know what it is supposed to do) and maintainable (I can see what parts are responsible for doing what). I also don't like to be thrown out of reading code by an unfamiliar idiom. Sort of like when you are reading a book and you read something that takes you out of the immersion and reminds you that you are reading.
You'll probably get lots of different, mutually exclusive advice, but here are my thoughts.
Nothing wrong with private methods being in a private category; that's what it's there for. If you don't like the declarations clogging up the file, either use code folding in the IDE, or put your extensions in a category in a different file.
Group related methods together and mark them with #pragma mark statements (there's a small sketch at the end of these notes).
Whatever code layout you use, consistency is important. Take a few minutes and write your own guidelines (here are mine) so if you forget what you are supposed to be doing you have a reference.
The controller doesn't have to be the delegate and data source; you can always have other classes for these.
Use descriptive names for methods and properties. Yes, you may document them, but you can't see documentation when Xcode applies code completion, which is where well-named methods and properties pay off. Also, code comments get stale if they aren't updated while the code itself changes.
Don't try to write clever code. You might think that it's better to chain a sequence of method calls on one line, but the compiler is better at optimising than you might think. It's okay to use temporary variables to hold values (mostly these are just pointers anyway, so relatively small) if it improves readability. Write code for humans to read.
DRY applies to Objective-C as much as other languages. Don't be worried about refactoring code into more methods. There is nothing wrong with having lots of methods as long as they are useful.
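A quick sketch of the #pragma mark idea from above (class and method names invented); Xcode's method popup then groups the methods under the mark titles:

@interface GameBoard : NSObject
- (void)advanceTurn;
@end

@implementation GameBoard

#pragma mark - Lifecycle

- (id)init {
    return [super init];
}

#pragma mark - Game logic

- (void)advanceTurn {
    // game rules go here
}

@end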
The very first thing I do, even before implementing a class or method, is to ask: "How would I want to use this from the outside?"
I never, ever begin by writing the internals of my classes and methods first. By starting off with an elegant public API the internals tend to become elegant for free, and if they don't then the ugliness is at least contained to a single method or class, and not allowed to pollute the rest of the code with its smell.
There are many design patterns out there; two decades of coding have taught me that the only pattern that stands the test of time is KISS: Keep It Simple, Stupid.
Some general rules of thumb, for any language or environment:
Follow your gut feeling over any advice you have read or heard!
Bail out early!
If needed, verify inputs early and bail out fast! Less cleanup to do.
Never add something to your code that you do not use.
An option for "reverse" might feel like something nice to have down the road.
In that case add it down the road! Do not waste time adding complexity you do not need.
Method names should describe what is done, never how it is done.
Methods should be allowed to change their implementation without changing their name as long as the result is the same.
If you cannot understand what a method does from its name, then change the name!
If the how part is complex enough, then use comments to describe your implementation.
Do not fear the singletons!
If your app only has one data model, then it is a singleton!
Passing around a single variable all over the place is just pretending it is something other than a singleton, and adds complexity as a bonus.
Plan for failures from the start.
Always use doFoo:error: instead of doFoo: from the start (see the sketch after this list).
Create nice NSError instances with end user readable localized descriptions from the start.
It is a major pain to retrofit error handling/messages to a large existing app.
And there will always be errors if you have users and IO involved!
Cocoa/Objective-C is object oriented, not class oriented like most of the popular kids out there that claim to be OOP.
Do not introduce a dumb value class with only properties; a class without methods performing actual work could just as well be a struct.
Let your objects be intelligent! Why add a whole new FooParser class if a fooFromString: method on Foo is all you need?
In Cocoa what you can do is always more important than what you are.
Do not introduce a protocol if a target/action can do.
Do not verify that instances conform to protocols or are a kind of class; that is up to the compiler.
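To illustrate the doFoo:error: point above, here's a minimal sketch of the pattern with a user-readable localized description (DocumentStore and its method are invented names):

@interface DocumentStore : NSObject
- (BOOL)saveText:(NSString *)text toURL:(NSURL *)url error:(NSError **)error;
@end

@implementation DocumentStore
- (BOOL)saveText:(NSString *)text toURL:(NSURL *)url error:(NSError **)error {
    if ([text length] == 0) {
        if (error) {
            // A readable, localized description the UI can show directly.
            NSDictionary *info = [NSDictionary
                dictionaryWithObject:NSLocalizedString(@"There is no text to save.", nil)
                              forKey:NSLocalizedDescriptionKey];
            *error = [NSError errorWithDomain:@"com.example.DocumentStore"
                                         code:1
                                     userInfo:info];
        }
        return NO;
    }
    // NSString already follows the pattern, so failures propagate naturally.
    return [text writeToURL:url atomically:YES
                   encoding:NSUTF8StringEncoding error:error];
}
@end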
My 2 cents:
Properties are usually better than old-style getter+setter pairs. Even if you use @dynamic properties, declare them with @property; this is way more informative and shorter.
I personally don't simulate "private" methods for classes. Yes, I can write a category somewhere in the .m(m) file, but since Obj-C has no pure way to declare a private method, why should I invent one? Anyway, even if you really need something like that, declare a separate "MyClassPrivate.h" with a category and include it in the .m(m) files to avoid duplicating the declarations (a sketch appears at the end of this answer).
Bindings. Use bindings for most Controller <-> UI relations, along with value transformers and formatters; don't write methods to read/write control values manually. Writing those by hand makes code look like something from the MFC era.
C++. A lot of code looks much better and shorter when written in C++. Since the compiler understands C++ classes, it's a good option for refactoring, especially when working with low-level code.
I usually split big controllers. Anything more than 500 lines of code is a good candidate for refactoring for me. For instance, I have a document window controller which, as of some version of the app, was extended with image importing/exporting options. The controller grew to 1,000 lines, where half of it was the "image stuff". That was the trigger for me to make an ImageStuffController, instantiate it in the NIB, and put all the image-related code in there.
All of the above makes it easier for me to maintain my code. For huge projects, where splitting the controllers and classes to keep them small results in a big number of files, I usually try to extract some code into a framework. For example, if a big part of the app is communicating with external web services, there is usually a straightforward way to extract a MyWebServices.framework from the main app.
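To make the separate-header idea from the second point concrete, a sketch (all names invented) might look like this:

// MyClass.h -- the public interface stays small.
@interface MyClass : NSObject
- (void)doWork;
@end

// MyClassPrivate.h -- imported only by the .m files that need the internals.
#import "MyClass.h"
@interface MyClass (Private)
- (void)validateState;
- (void)rebuildCaches;
@end

// MyClass.m
#import "MyClassPrivate.h"
@implementation MyClass
- (void)doWork {
    [self validateState];
    [self rebuildCaches];
}
@end

@implementation MyClass (Private)
- (void)validateState { /* checks go here */ }
- (void)rebuildCaches { /* cache rebuilding goes here */ }
@end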

Is subclassing in Objective-C a bad practice?

After reading lots of blogs, forum entries and several Apple docs, I still don't know whether extensive subclassing in Objective-C is a wise thing to do or not.
Take for example the following case:
Say I'm developing a puzzle game which has a lot of elements. All of those elements share a certain amount of the same behaviour. Then, within my collection of elements, different groups of elements share equal behaviour, distinguishing groups from groups, etc...
So, after determining what inherits from what, I decided to subclass out of oblivion. And why shouldn't I? Considering the ease tweaking general behaviour takes with this model, I think I accomplished something OOP is meant for.
But, and this is the source of my question, Apple mentions using delegates, data source methods, and informal protocols in favour of subclassing. It really boggles my mind why.
There seem to be two camps: those in favor of subclassing, and those in favor of not. It depends on personal taste, apparently. I'm wondering what the pros and cons are of subclassing massively and of not subclassing massively.
To wrap it up, my question is simple: Am I right? And why or why not?
Delegation is a means of using the composition technique to replace some aspects of coding you would otherwise subclass for. As such, it boils down to the age-old question of whether the task at hand needs one large thing that knows how to do a lot, or a loose network of specialized objects (a very UNIX-like model of responsibility).
Using a combination of delegates and protocols (to define what the delegates are supposed to be able to do) provides a great deal of flexibility of behavior and ease of coding. Going back to the Liskov substitution principle: when you subclass, you have to be careful you don't do anything a user of the whole class would find unexpected. But if you are simply making a delegate object, you have much less to be responsible for; you only need the delegate methods you implement to do what that one protocol calls for, and beyond that you don't care.
There are still many good reasons to use subclasses; if you truly have shared behavior and variables between a number of classes, it may make a lot of sense to subclass. But if you can take advantage of the delegate concept, you'll often make your classes easier to extend or use in ways you, the designer, may not have expected.
I tend to be more of a fan of formal protocols than informal ones, not only because formal protocols make sure you actually have the methods a class treating you as a delegate expects, but also because the protocol definition is a natural place to document what you expect from a delegate that implements those methods.
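As a rough sketch of that formal-protocol-plus-delegate shape (the names are invented for illustration):

@class PuzzleBoard;

// The formal protocol documents exactly what a delegate is expected to handle.
@protocol PuzzleBoardDelegate <NSObject>
- (void)puzzleBoardDidComplete:(PuzzleBoard *)board;
@optional
- (BOOL)puzzleBoard:(PuzzleBoard *)board shouldAllowMove:(NSInteger)move;
@end

@interface PuzzleBoard : NSObject {
    id <PuzzleBoardDelegate> delegate; // not retained, by the usual convention
}
@property (assign) id <PuzzleBoardDelegate> delegate;
- (void)finish;
@end

@implementation PuzzleBoard
@synthesize delegate;

- (void)finish {
    // The board doesn't care who the delegate is, only that it honors the protocol.
    [delegate puzzleBoardDidComplete:self];
}
@end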
Personally, I follow this rule: I can create a subclass if it respects the Liskov substitution principle.
Subclassing has its benefits, but it also has some drawbacks. As a general rule, I try to avoid implementation inheritance and instead use interface inheritance and delegation.
One of the reasons I do this is that when you inherit implementation, you can wind up with problems if you override methods but don't adhere to their (sometimes undocumented) contract. Additionally, I find walking class hierarchies with implementation inheritance difficult because methods can be overridden or implemented at any level. Finally, when subclassing you can only widen an interface, you can't narrow it. This leads to leaky abstractions. A good example of this is java.util.Stack, which extends java.util.Vector. I shouldn't be able to treat a stack as a Vector; doing so only allows the consumer to do an end-run around the stack's interface.
Others have mentioned the Liskov Substitution Principle. I think that using it would certainly have cleared up the java.util.Stack problem, but it can also lead to very deep class hierarchies in order to ensure that classes get only the methods they are supposed to have.
Instead, with interface inheritance there is essentially no class hierarchy because interfaces rarely need to extend one another. The classes simply implement the interfaces that they need to and can therefore be treated in the correct way by the consumer. Additionally, because there is no implementation inheritance, consumers of these classes won't infer their behavior due to previous experience with a parent class.
In the end though, it doesn't really matter which way you go. Both are perfectly acceptable. It's really more a matter of what you're more comfortable with and what the frameworks that you're working with encourage. As the old saying goes: "When in Rome do as Romans do."
There's nothing wrong with using inheritance in Objective-C. Apple uses it quite a bit. For instance, in Cocoa Touch, the inheritance tree of UIButton is UIControl : UIView : UIResponder : NSObject.
I think Martin hit on an important point in mentioning the Liskov substitution principle. Also, proper use of inheritance requires that the implementer of the subclass has a deep knowledge of the super class. If you've ever struggled to extend a non-trivial class in a complex framework, you know that there's always a learning curve. In addition, implementation details of the super class often "leak through" to the subclass, which is a big pain in the #$& for framework builders.
Apple chose to use delegation in many instances to address these problems; non-trivial classes like UIApplication expose common extension points through a delegate object so most developers have both an easier learning curve and a more loosely coupled way to add application specific behavior -- extending UIApplication directly is rarely necessary.
In your case, for your application specific code, use which ever techniques you're comfortable with and work best for your design. Inheritance is a great tool when used appropriately.
I frequently see application programmers draw lessons from framework designs and trying to apply them to their application code (this is common in Java, C++ and Python worlds as well as Objective-C). While it's good to think about and understand the choices framework designers made, those lessons don't always apply to application code.
In general you should avoid subclassing API classes if there exist delegates, etc. that accomplish what you want to do. In your own code subclassing is often nicer, but it really does depend on your goals; e.g. if you're providing an API you should provide a delegate-based API rather than assuming subclassing.
When dealing with APIs, subclassing has more potential for bugs. For example, if any class in the class hierarchy gains a new method that has the same name as your addition, you may break things. Also, if you're providing a useful helper-type function, there's a chance that in the future something similar will be added to the actual class you were subclassing, and that version might be more efficient, but your override will hide it.
Please read the Apple documentation "Adding Behavior to a Cocoa Program". Under the "Inheriting from a Cocoa class" section, see the second paragraph. Apple clearly mentions that subclassing is the primary way of adding application-specific behavior to the framework (please note, FRAMEWORK).
The MVC pattern does not completely disallow the use of subclasses or subtypes. At least I have not seen this recommendation from either Apple or others (if I have missed it, please feel free to point me to the right source of information). If you are subclassing API classes only within your application, please go ahead; no one's stopping you, but do take care that it does not break the behavior of the class/API as a whole. Subclassing is a great way of extending a framework API's functionality. We see a lot of subclassing within Apple's iOS framework APIs too.
As a developer, one has to take care that the implementation is well documented and not duplicated accidentally by another developer. It's another ball game altogether if your application is a set of API classes that you plan to distribute as a reusable component.
IMHO, rather than asking around what the best practice is, first read the related documentation thoroughly, implement and test it. Make your own judgement. You know best about what you're up to.
It's easy for others (like me and so many others) to just read stuff from different sources on the Net and throw around terms. Be your own judge, it has worked for me so far.
I really think it depends on what you're trying to do. If the puzzle game you describe in the example really does have a set of unique elements that share common attributes, and there are no provided classes (say, for example, a hypothetical "NSPuzzlePiece") that fit your needs, then I don't see a problem with subclassing extensively.
In my experience, delegates, data source methods, and informal protocols are much more useful when Apple has provided a class that already does something close to what you want it to do.
For example, say you're building an app that uses a table. There is (and I speak here of the iPhone SDK, since that's where I have experience) a class UITableView that does all the little niceties of creating a table for interaction with the user, and it's much more efficient to define a data source for an instance of UITableView than it is to completely subclass UITableView and redefine or extend its methods to customize its behavior.
Similar concepts go for delegates and protocols. If you can fit your ideas into Apple's classes, then it's usually easier (and will work more smoothly) to do so and use data source, delegates, and protocols than it is to create your own subclasses. It helps you avoid extra work and wasting time, and is usually less error-prone. Apple's classes have taken care of the business of making functions efficient and debugging; the more you can work with them, the fewer mistakes your program will have in the long run.
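For the curious, that data-source route amounts to implementing a couple of methods rather than subclassing UITableView; a rough sketch (the controller and its data are placeholders):

#import <UIKit/UIKit.h>

@interface FruitListController : UIViewController <UITableViewDataSource> {
    NSArray *fruits; // e.g. a few NSStrings loaded elsewhere
}
@end

@implementation FruitListController

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [fruits count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *cellID = @"FruitCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellID];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                       reuseIdentifier:cellID] autorelease];
    }
    cell.textLabel.text = [fruits objectAtIndex:indexPath.row];
    return cell;
}

@end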
My impression of ADC's emphasis 'against' subclassing has more to do with the legacy of how the operating system has evolved. Back in the day (Mac Classic, a.k.a. OS 9), when C++ was the primary interface to most of the Mac Toolbox, subclassing was the de facto standard way for a programmer to modify the behaviour of commonplace OS features (and this was indeed sometimes a pain in the neck, and meant that one had to be very careful that any and all modifications behaved predictably and didn't break standard behaviour).
That said, my impression is that ADC's emphasis against subclassing is not putting forth a case for designing an application's class hierarchy without inheritance, but instead pointing out that in the new way of doing things (i.e. OS X) there are, in most cases, more appropriate means of customizing standard behavior without needing to subclass.
So, by all means, design your puzzle program's architecture as robustly as you can, leveraging inheritance as you see fit!
Looking forward to seeing your cool new puzzle application!
Apple indeed appears to passively discourage subclassing with Objective-C.
It is an axiom of OOP design to favor composition over inheritance.

Can Procedural Programming use Objects?

I have seen a number of different topics on Stack Overflow discussing the differences between procedural and object-oriented programming. The question is: if the program uses an object, can it still be considered procedural?
Yes, and a lot of early Java was exactly that; you had a bunch of C programmers get into Java because it was "hot", people who didn't think in OOP. Lots of big classes with lots of static methods, lots of RTTI in case statements, lots of use of instanceof.
GLib has GObject which is object oriented programming implemented in pure C. While you can build up an API which begins to "feel" like OOP, it's still just plain "C" code with no actual classes (from the compiler's point of view). If you get far enough so you're starting to implement Object Oriented design patterns then I would call that OOP no matter what language it's written in. It's all about the feel of the code and how you have to think to write against it.
Procedural programming has to do with how you structure your program and model your domain. Just because at some point you instantiate an object, doesn't alone make your program oriented towards objects (i.e., object-oriented).
The distinction is entirely subjective. For example, if you code a C library using state passing, you are implementing something of a "tell" pattern, with the state as the object.
Classes can be considered as super types. When we converted from VB3 to VB6, our first pass was finding all the types we used, then finding all the subroutines and functions that took that type as a parameter. We moved those into the class definition, removed the parameter, and then tested, leaving the original flow of control intact.
Then we refactored our flow of control to use various patterns and object-oriented techniques.
The heart of object orientation is about how you decompose the problem into smaller parts, and how these parts work together. It's about the philosophy. Using an OO language does not necessarily mean a program written in it is OO; it's just easier to do OO with a language that supports common OO concepts out of the box.
To answer the question: "If the program uses an object can it still be considered procedural?" - That depends on what your definitions of object and procedural programming are. But in my opinion, the answer is resounding "Yes". "Objects" are only a part of the philosophy that is OO and using them "somewhere in your application" does not mean you're doing OO.
The answer to your question is yes. For example, I've got an old legacy PHP page to maintain. Most of the code is procedural, but I decided that some things could be maintained much more easily if I plugged Zend Framework into the existing stuff and wrote some of my own classes to replace some of the old code. In general this application is still written and functioning in a mainly procedural way, but here and there a class or two is instantiated and used. I guess there is no clear border between procedural and OO; you can do it more or less cleanly. If you don't have enough layers for the size and complexity of your app, you'll automatically end up with more procedural code too...

Why the claim that C# people don't get object-oriented programming? (vs class-oriented)

This caught my attention last night.
On the latest ALT.NET Podcast, Scott Bellware discusses how, as opposed to Ruby, languages like C#, Java et al. are not truly object-oriented, opting instead for the phrase "class-oriented". They talk about this distinction in very vague terms without going into much detail or discussing the pros and cons much.
What is the real difference here and how much does it matter? What other languages, then, are "object-oriented"? It sounded pretty interesting but I don't want to have to learn Ruby just to know what, if anything, I am missing.
Update
After reading some of the answers below it seems like people generally agree that the reference is to duck-typing. What I'm not sure I understand still though is the claim that this ultimately changes all that much. Especially if you are already doing proper TDD with loose coupling etc. Can someone show me an example of a specific thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?
In an object-oriented language, objects are defined by defining objects rather than classes, although classes can provide some useful templates for specific, cookie-cutter definitions of a given abstraction. In a class-oriented language, like C# for example, objects must be defined by classes, and these templates are usually canned and packaged and made immutable before runtime. This arbitrary constraint that objects must be defined before runtime and that the definitions of objects are immutable is not an object-oriented concept; it's class oriented.
The duck typing comments here are more attributable to the fact that Ruby and Python are more dynamic than C#. It doesn't really have anything to do with their OO nature.
What (I think) Bellware meant by that is that in Ruby, everything is an object. Even a class. A class definition is an instance of an object. As such, you can add/change/remove behavior to it at runtime.
Another good example is that NULL is an object as well. In Ruby, everything is LITERALLY an object. Having such deep OO in its entire being allows for some fun metaprogramming techniques such as method_missing.
IMO, it's really over-defining "object-oriented", but what they are referring to is that Ruby, unlike C#, C++, Java, et al., does not require defining a class; you really only ever work directly with objects. Conversely, in C# for example, you define classes that you then must instantiate into objects by way of the new keyword. The key point is that you must declare a class in C#, or describe it. Additionally, in Ruby, everything, even numbers for example, is an object. In contrast, C# still retains the concept of an object type and a value type. This, in fact, I think illustrates the point they make about C# and other similar languages: object type and value type imply a type system, meaning you have an entire system of describing types as opposed to just working with objects.
Conceptually, I think OO design is what provides the abstraction for us to deal with complexity in software systems these days. The language is a tool used to implement an OO design; some make it more natural than others. I would still argue that from a more common and broader definition, C# and the others are still object-oriented languages.
There are three pillars of OOP
Encapsulation
Inheritance
Polymorphism
If a language can do those three things, it is an OOP language.
I am pretty sure the argument of language X does OOP better than language A will go on forever.
OO is sometimes defined as message oriented. The idea is that a method call (or property access) is really a message sent to another object. How the receiving object handles the message is completely encapsulated. Often the message corresponds to a method which is then executed, but that is just an implementation detail. You can, for example, create a catch-all handler which is executed regardless of the method name in the message.
Static OO like in C# does not have this kind of encapsulation. A message has to correspond to an existing method or property, otherwise the compiler will complain. Dynamic languages like Smalltalk, Ruby or Python do, however, support "message-based" OO.
So in this sense C# and other statically typed OO languages are not true OO, since they lack "true" encapsulation.
Update: It's the new wave, which suggests everything that we've been doing till now is passé. It seems to be popping up quite a bit in podcasts and books. Maybe this is what you heard.
Till now we've been concerned with static classes and not unleashed the power of object oriented development. We've been doing 'class based dev.' Classes are fixed/static templates to create objects. All objects of a class are created equal.
e.g. Just to illustrate what I've been babbling about... let me borrow a Ruby code snippet from a PragProg screencast I just had the privilege of watching.
'Prototype based development' blurs the line between objects and classes.. there is no difference.
animal = Object.new # create a new instance of the base Object
def animal.number_of_feet=(feet) # adding new methods to an Object instance. What?
  @number_of_feet = feet
end
def animal.number_of_feet
  @number_of_feet
end
cat = animal.clone # inherits the 'number_of_feet' behavior from animal
cat.number_of_feet = 4
felix = cat.clone # inherits the state of '4' and the behavior from cat
puts felix.number_of_feet # outputs 4
The idea is that it's a more powerful way to inherit state and behavior than traditional class-based inheritance. It gives you more flexibility and control in certain "special" scenarios (that I've yet to fathom). This allows things like mix-ins (reusing behavior without class inheritance).
By challenging the basic primitives of how we think about problems, 'true OOP' is like 'the Matrix' in a way... you keep going WTF in a loop. Like this one, where the base class of Container can be either an Array or a Hash, based on which side of 0.5 the generated random number falls:
class Container < (rand < 0.5 ? Array : Hash)
end
Ruby, JavaScript and the new brigade seem to be the ones pioneering this. I'm still out on this one... reading up and trying to make sense of this new phenomenon. It seems to be powerful... too powerful... Useful? I need my eyes opened a bit more. Interesting times, these.
I've only listened to the first 6-7 minutes of the podcast that sparked your question. If their intent is to say that C# isn't a purely object-oriented language, that's actually correct. Not everything in C# is an object (at least the primitives aren't, though boxing creates an object containing the same value). In Ruby, everything is an object. Daren and Ben seem to have covered all the bases in their discussion of "duck typing", so I won't repeat it.
Whether or not this difference (everything an object versus not everything an object) is material/significant is a question I can't readily answer, because I don't have sufficient depth in Ruby to compare it to C#. Those of you on here who know Smalltalk (I don't, though I wish I did) have probably been looking at the Ruby movement with some amusement, since it was the first pure OO language 30 years ago.
Maybe they are alluding to the difference between duck typing and class hierarchies?
if it walks like a duck and quacks like a duck, just pretend it's a duck and kick it.
In C#, Java etc. the compiler fusses a lot about: Are you allowed to do this operation on that object?
Object Oriented vs. Class Oriented could therefore mean: Does the language worry about objects or classes?
For instance: In Python, to implement an iterable object, you only need to supply a method __iter__() that returns an object that has a method named next(). That's all there is to it: No interface implementation (there is no such thing). No subclassing. Just talking like a duck / iterator.
EDIT: This post was upvoted while I rewrote everything. Sorry, won't ever do that again. The original content included advice to learn as many languages as possible and to nary worry about what the language doctors think / say about a language.
That was an abstract-podcast indeed!
But I see what they're getting at - they just dazzled by Ruby Sparkle. Ruby allows you to do things that C-based and Java programmers wouldn't even think of + combinations of those things let you achieve undreamt of possibilities.
Adding new methods to a built-in String class coz you feel like it, passing around unnamed blocks of code for others to execute, mixins... Conventional folks are not used to objects changing too far from the class template.
Its a whole new world out there for sure..
As for the C# guys not being OO enough... dont take it to heart.. Just take it as the stuff you speak when you are flabbergasted for words. Ruby does that to most people.
If I had to recommend one language for people to learn in the current decade.. it would be Ruby. I'm glad I did.. Although some people may claim Python. But its like my opinion.. man! :D
I don't think this is specifically about duck typing. For instance, C# supports limited duck typing already: you can use foreach on any class that exposes a suitable GetEnumerator() method whose result provides MoveNext() and Current, even without implementing IEnumerable.
The concept of duck-typing is compatible with statically typed languages like Java and C#, it's basically an extension of reflection.
This is really the case of static vs dynamic typing. Both are proper-OO, in as much as there is such a thing. Outside of academia it's really not worth debating.
Rubbish code can be written in either. Great code can be written in either. There's absolutely nothing functional that one model can do that the other can't.
The real difference is in the nature of the coding done. Static types reduce freedom, but the advantage is that everyone knows what they're dealing with. The opportunity to change instances on the fly is very powerful, but the cost is that it becomes hard to know what you're dealing with.
For instance, for Java or C#, IntelliSense is easy: the IDE can quickly produce a drop-down list of possibilities. For JavaScript or Ruby this becomes a lot harder.
For certain things, for instance producing an API that someone else will code with, there is a real advantage in static typing. For others, for instance rapidly producing prototypes, the advantage goes to dynamic.
It's worth having an understanding of both in your skills toolbox, but nowhere near as important as understanding the one you already use in real depth.
Object-oriented is a concept. This concept is based upon certain ideas. The technical names of these ideas (actually principles that evolved over time and have not been there from the first hour) have already been given above, so I'm not going to repeat them. I'm rather explaining this as simply and non-technically as I can.
The idea of OO programming is that there are objects. Objects are small, independent entities. These entities may have embedded information or they may not. If they have such information, only the entity itself can access it or change it. The entities communicate with each other by sending messages. Compare this to human beings. Human beings are independent entities, having internal data stored in their brains, and they interact with each other by communicating (e.g. talking to each other). If you need knowledge from someone else's brain, you cannot access it directly; you must ask the person a question and they may answer, telling you what you wanted to know.
And that's basically it. This is the real idea behind OO programming: write these entities, define the communication between them, and have them interact together to form an application. This concept is not bound to any language. It's just a concept, and whether you write your code in C#, Java, or Ruby is not important. With some extra work this concept can even be applied in pure C, even though it is a procedural language; it offers everything you need for the concept.
Different languages have adopted this concept of OO programming, and of course the concepts are not always equal. Some languages allow what other languages forbid, for example. Now, one of the concepts involved is the concept of classes. Some languages have classes, some don't. A class is a blueprint for what an object looks like. It defines the internal data storage of an object, it defines the messages an object can understand, and if there is inheritance (which is not mandatory for OO programming!), a class also defines from which other class (or classes, if multiple inheritance is allowed) it inherits (and which properties, if selective inheritance exists). Once you have created such a blueprint, you can generate an unlimited number of objects built according to this blueprint.
There are OO languages that have no classes, though. How are objects then built? Well, usually dynamically. E.g. you can create a new blank object and then dynamically add internal structure like instance variables or methods (messages) to it. Or you can duplicate an already existing object, with all its properties, and then modify it. Or possibly merge two objects into a new one. Unlike class-based languages, these languages are very dynamic, as you can generate objects dynamically during runtime in ways that not even you, the developer, thought about when you started writing the code.
Usually this dynamism has a price: the more dynamic a language is, the more memory (RAM) objects will waste and the slower everything gets, as program flow is extremely dynamic as well and it's hard for a compiler to generate effective code if it has no chance to predict code or data flow. JIT compilers can optimize some parts of that during runtime, once they know the program flow; however, as these languages are so dynamic, program flow can change at any time, forcing the JIT to throw away all compilation results and re-compile the same code over and over again.
But this is a tiny implementation detail - it has nothing to do with the basic OO principle. It is nowhere said that objects need to be dynamic or must be alterable during runtime. The Wikipedia says it pretty well:
Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance.
http://en.wikipedia.org/wiki/Object-oriented_programming
They may or they may not. This is all not mandatory. Mandatory is only the presence of objects and that they must have ways to interact with each other (otherwise objects would be pretty useless if they cannot interact with each other).
You asked: "Can someone show me an example of a wonderous thing I could do with ruby that I cannot do with c# and that exemplifies this different oop approach?"
One good example is Active Record, the ORM built into Rails. The model classes are dynamically built at runtime, based on the database schema.
This is really probably getting down to what these people see others doing in C# and Java, as opposed to what C# and Java support in terms of OOP. Most languages can be used in different programming paradigms. For example, you can write procedural code in C# and Scheme, and you can do functional-style programming in Java. It is more about what you are trying to do and what the language supports.
I'll take a stab at this.
Python and Ruby are duck-typed. To generate any maintainable code in these languages, you pretty much have to use test driven development. As such, it is very important for a developer to easily inject dependencies into their code without having to create a giant supporting framework.
Successful dependency injection depends upon having a pretty good object model. The two are sort of two sides of the same coin. If you really understand how to use OOP, then you should by default create designs where dependencies can be easily injected.
Because dependency injection is easier in dynamically typed languages, the Ruby/Python developers feel like their language understands the lessons of OO much better than other statically typed counterparts.