Since I started using TDD, I've been firmly convinced that it's a great way to write good, correct, pattern-compliant code without forcing my design decisions.
And I've found this true in 80% of scenarios, but I have problems when it comes to testing a certain type of object which, for some reason, wraps and hides another object inside its implementation.
To give you an example, let's think of a MyLocationManager object which gives my other objects a common interface to use, and wraps a CLLocationManager inside.
When I want to test such an object I have to supply a mock CLLocationManager, of course.
There is of course the property/constructor injection method, but this means adding a property, or a constructor parameter, for an object that I simply want to hide from the other objects: I created MyLocationManager to wrap and hide CLLocationManager, so why should I expose a property just to test it?
A method I've found which is pretty straightforward is to swizzle CLLocationManager's methods, so I can exchange the actual implementation of a method with a mock one, but this seems pretty unclean and I don't know how safe it is.
As far as I can understand, exposing the wrapped object through a property or constructor might be seen as a Law of Demeter violation, but on the other hand, I think that in Objective-C some flexibility on this pattern is accepted.
So my question is: is there some way I'm not clearly seeing to adopt property/constructor injection, or is method swizzling a commonly used practice?
Are there any other techniques adopted for this scenario that I should use instead?
On a footnote:
This problem also arises with objects that wrap networking code and classes like NSURLSession.
Well, at some point the testing set-up can become more complicated than the code under test, so one might remember what testing was invented for.
I think a pragmatic way is to expose the property you need only in a separate header containing a class continuation.
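For example, a minimal sketch of such a header (the file and property names are placeholders, not from the original posts):

// MyLocationManager_Private.h
// Imported by MyLocationManager.m (so the property is synthesized there)
// and by the unit tests, but not by regular clients.
@interface MyLocationManager ()
@property (nonatomic, strong) CLLocationManager *locationManager;
@end

The public header stays clean, and only code that deliberately imports the private header can reach the wrapped manager.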
After a long time of Test Driven Development experience, I find this old question of mine pretty simple to answer.
For some reason I was thinking that property injection and constructor injection were to be avoided in order to keep the wrapped object hidden.
I simply don't think this anymore.
In the previous scenario of my original question the right answer from present-me is:
You have to expose the CLLocationManager dependency, perhaps by providing a constructor injection method plus a convenience constructor that initialises the wrapper with a default CLLocationManager.
There is no real need to hide the dependency, even if it is a wrapper class, because in the exact moment you find yourself needing to swizzle some methods, you're hacking the "internals" of your object and tweaking it without testing its interface, modifying the runtime behaviour in an uncontrolled manner.
If you wanna swizzle, swizzle ahead, but it's not the right choice.
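A minimal sketch of that advice, reusing the names from the question (the mock subclass in the last line is hypothetical):

// MyLocationManager.h
#import <CoreLocation/CoreLocation.h>

@interface MyLocationManager : NSObject
- (instancetype)initWithLocationManager:(CLLocationManager *)locationManager; // designated initialiser, used by tests
- (instancetype)init; // convenience initialiser for production code
@end

// MyLocationManager.m
@implementation MyLocationManager {
    CLLocationManager *_locationManager; // the wrapped dependency
}

- (instancetype)initWithLocationManager:(CLLocationManager *)locationManager {
    if ((self = [super init])) {
        _locationManager = locationManager;
    }
    return self;
}

- (instancetype)init {
    // Production callers never see the dependency.
    return [self initWithLocationManager:[[CLLocationManager alloc] init]];
}
@end

A test can now call [[MyLocationManager alloc] initWithLocationManager:mockManager], where mockManager is a CLLocationManager subclass returning canned locations, while the rest of the app keeps using plain init.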
I recently reread the interesting tutorial from Mike Ash about how to create classes at runtime in Objective-C.
For a long time I have been wondering where to apply this powerful feature of the language. Every idea that comes to my mind looks like an overkill solution, and I eventually proceed with NSDictionary. What are your use cases for creating classes at runtime? The only one I see is an Obj-C interpreter... More ideas?
There are some possible cases I see where someone would need to create a class at runtime:
To hide information about it (it won't help in most cases, but... you can)
To simulate multiple inheritance (if you really need it :)
Interpreting your own language (e.g. something XML-like) with a program written in Obj-C (something like NSProxy, but even better)
Creating a dynamic class that can change its behavior at runtime
In general, there are some possible usages, but in the real world, in typical applications, there's actually no need to do this :)
It could be used, for example, with Core Data or any database-related API to create new classes of objects unknown at compile time. However, I doubt this is used often; it's mostly a mechanism the system uses itself when it runs a program...
KVO, in the Cocoa frameworks, is implemented by dynamically creating "notifying" versions of your classes. See http://www.mikeash.com/pyblog/friday-qa-2009-01-23.html
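To make the mechanism concrete, here is a minimal sketch of building a class with the runtime API (the class and method names are invented for illustration):

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// The C function that will back the dynamically added method.
static NSString *Greeting(id self, SEL _cmd) {
    return @"Hello from a class built at runtime";
}

int main(void) {
    // Allocate a new class pair with NSObject as the superclass.
    Class cls = objc_allocateClassPair([NSObject class], "RuntimeGreeter", 0);
    // Add a -greeting method; the type encoding "@@:" means (object return, self, _cmd).
    class_addMethod(cls, NSSelectorFromString(@"greeting"), (IMP)Greeting, "@@:");
    // Register the class so it can be instantiated like any other.
    objc_registerClassPair(cls);

    id greeter = [[cls alloc] init];
    // ARC warns about performSelector: with a dynamic selector; fine for a demo.
    NSLog(@"%@", [greeter performSelector:NSSelectorFromString(@"greeting")]);
    return 0;
}

This is essentially what KVO does behind the scenes when it builds the "notifying" subclass mentioned above.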
I wanted to ask you all for your opinions on code smells in Objective-C, specifically Cocoa Touch. I'm working on a fairly complex game, and I'm about to start the Great December Refactoring.
A good number of my classes, the models in particular, are full of methods that deal with internal business logic; I'll be hiding these in a private category, in my war against massive header files. Those private categories contain a large number of declarations, and this makes me feel uneasy... almost like Objective-C's out to make me feel guilty about all of these methods.
The more I refactor (a good thing!), the more I have to maintain all this duplication (not so good). It just feels wrong.
In a language like Ruby, the community puts a LOT of emphasis on very short, clear, beautiful methods. My question is, for Objective-C (Cocoa Touch specifically): how long are your methods, how big are your controllers, and how many methods per class do you find to be typical in your projects? Are there any particularly nice, beautiful examples of classes made up of short methods in Objective-C, or is that simply not an important part of the language's culture?
DISCLOSURE: I'm currently reading "The Little Schemer", which should explain my sadness re: Objective-C.
Beauty is subjective. For me, an Objective-C class is beautiful if it is readable (I know what it is supposed to do) and maintainable (I can see what parts are responsible for doing what). I also don't like to be thrown out of reading code by an unfamiliar idiom. Sort of like when you are reading a book and you read something that takes you out of the immersion and reminds you that you are reading.
You'll probably get lots of different, mutually exclusive advice, but here are my thoughts.
Nothing wrong with private methods being in a private category; that's what it's there for. If you don't like the declarations clogging up the file, either use code folding in the IDE or put your extensions as a category in a different file.
Group related methods together and mark them with #pragma mark statements (a short example follows this list).
Whatever code layout you use, consistency is important. Take a few minutes and write your own guidelines (here are mine) so if you forget what you are supposed to be doing you have a reference.
The controller doesn't have to be the delegate and data source; you can always have other classes for these.
Use descriptive names for methods and properties. Yes, you may document them, but you can't see documentation when Xcode applies code completion, which is where well-named methods and properties pay off. Also, code comments go stale if they aren't updated while the code itself changes.
Don't try and write clever code. You might think that it's better to chain a sequence of method calls on one line, but the compiler is better at optimising than you might think. It's okay to use temporary variables to hold values (mostly these are just pointers anyway, so relatively small) if it improves readability. Write code for humans to read.
DRY applies to Objective-C as much as other languages. Don't be worried about refactoring code into more methods. There is nothing wrong with having lots of methods as long as they are useful.
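For instance, the grouping mentioned above might look like this inside an implementation file (the section names are just an example):

#pragma mark - Lifecycle

- (void)viewDidLoad {
    [super viewDidLoad];
    // set up the board, timers, etc.
}

#pragma mark - Game logic

- (void)advanceToNextTurn {
    // ...
}

Xcode turns each #pragma mark into a labelled separator in the jump bar, which makes long files much easier to navigate.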
The very first thing I do even before implementing class or method is to ask: "How would I want to use this from the outside?"
I never ever, never begin by writing the internals of my classes and methods first. By starting off with an elegant public API, the internals tend to become elegant for free; and if they don't, then the ugliness is at least contained to a single method or class, and not allowed to pollute the rest of the code with its smell.
There are many design patterns out there, but two decades of coding have taught me that the only pattern that stands the test of time is KISS: Keep It Simple, Stupid.
Some general rules of thumb, for any language or environment:
Follow your gut feeling over any advice you have read or heard!
Bail out early!
If needed, verify inputs early and bail out fast! Less cleanup to do.
Never add something to your code that you do not use.
An option for "reverse" might feel like something nice to have down the road.
In that case add it down the road! Do not waste time adding complexity you do not need.
Method names should describe what is done, never how it is done.
Methods should be allowed to change their implementation without changing their name as long as the result is the same.
If you cannot understand what a method does from its name, then change the name!
If the how part is complex enough, then use comments to describe your implementation.
Do not fear the singletons!
If your app only has one data model, then it is a singleton!
Passing around a single variable all over the place is just pretending it is something other than a singleton, with added complexity as a bonus.
Plan for failures from the start.
Always use doFoo:error: instead of doFoo: from the start (a sketch follows this list).
Create nice NSError instances with end user readable localized descriptions from the start.
It is a major pain to retrofit error handling/messages to a large existing app.
And there will always be errors if you have users and IO involved!
Cocoa/Objective-C is Object Oriented, not Class Oriented like most of the popular kids out there that claim to be OOP.
Do not introduce a dumb value class with only properties; a class without methods performing actual work could just as well be a struct.
Let your objects be intelligent! Why add a whole new FooParser class if a fooFromString: method on Foo is all you need?
In Cocoa what you can do is always more important than what you are.
Do not introduce a protocol if a target/action will do.
Do not verify that instances conform to protocols or are a kind of class; that is up to the compiler.
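A minimal sketch of the doFoo:error: pattern mentioned in the list above (the method, helper, domain and error code are invented for illustration):

- (BOOL)exportToURL:(NSURL *)url error:(NSError **)error {
    NSData *data = [self serializedData]; // hypothetical helper
    if (![data writeToURL:url atomically:YES]) {
        if (error) {
            NSDictionary *info = @{NSLocalizedDescriptionKey:
                NSLocalizedString(@"The document could not be exported.", nil)};
            *error = [NSError errorWithDomain:@"com.example.myapp" code:1 userInfo:info];
        }
        return NO; // the caller checks the return value first, then the error
    }
    return YES;
}

Doing this from day one means every failure already carries a message you can show the end user.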
My 2 cents:
Properties are usually better than old-style getters + setters. Even if you use @dynamic properties, declare them with @property; this is way more informative and shorter.
I personally don't simulate "private" methods for classes. Yes, I can write a category somewhere in the .m(m) file, but since Obj-C has no pure way to declare a private method, why should I invent one? Anyway, if you really need something like that, declare a separate "MyClassPrivate.h" with a category and include it in the .m(m) files to avoid duplicating the declarations.
Bindings. Use bindings for most controller <-> UI relations, along with transformers and formatters; just don't write methods to read/write control values manually. That kind of code looks like something from the MFC era.
C++. A lot of code looks much better and shorter when written in C++. Since the compiler understands C++ classes, it's a good option for refactoring, especially when working with low-level code.
I usually split big controllers. Anything more than 500 lines of code is a good candidate for refactoring for me. For instance, I have a document window controller; in some version of the app it was extended with image importing/exporting options. The controller grew to 1,000 lines, where half was the "image stuff". That was the "trigger" for me to make an ImageStuffController, instantiate it in the NIB and put all image-related code there.
All of the above makes it easier for me to maintain my code. For huge projects, where splitting the controllers and classes to keep them small results in a big number of files, I usually try to extract some code into a framework. For example, if a big part of the app communicates with external web services, there is usually a straightforward way to extract a MyWebServices.framework from the main app.
When do you encourage programming against an interface and not directly to a concrete class?
A guideline that I follow is to create abstractions whenever code needs to cross a logical/physical boundary, especially when infrastructure-related concerns are involved.
Another checkpoint is whether a dependency is likely to change in the future, due to possible additional concerns (such as caching, transactional awareness, or invoking a web service instead of in-process execution), or whether such dependencies have direct references to infrastructure integration points.
If code depends on something that does not require control to cross a logical/physical boundary, I more or less don't create abstractions for it.
Am I missing anything?
Also, use interfaces when
Multiple objects need to be acted upon in a particular fashion but are not fundamentally related. Perhaps many of your business objects access a particular utility object, and when they do, they need to give a reference to themselves to that utility object so it can call a particular method. Put that method in an interface and pass that interface to the utility object.
Passing around interfaces as parameters can be very helpful in unit testing. Even if you have just one type of object that sports a particular interface, and hence don't really need a defined interface, you might define/implement an interface solely to "fake" that object in unit tests.
Related to the first two bullets, check out the Observer pattern and Dependency Injection. I'm not saying to implement these patterns, but they illustrate the kinds of places where interfaces are really helpful.
Another twist on this is implementing a couple of the SOLID principles: the Open/Closed Principle and the Interface Segregation Principle. As with the previous bullet, don't get stressed about strictly applying these principles everywhere (right away, at least), but use the concepts to move your thinking away from just what objects go where, and toward contracts and dependencies.
In the end, let's not make it too complicated: we're in a strongly typed world in .NET. If you need to call a method or set a property but the object you're passing/using could be fundamentally different, use an interface.
I would add that if your code is not going to be referenced by another library (for a while at least), then the decision of whether to use an interface in a particular situation is one that you can responsibly put off. The "extract interface" refactoring is easy to do these days. In my current project, I've got an object being passed around that I'm thinking maybe I should switch to an interface; I'm not stressing about it.
Interface abstractions are convenient when doing unit tests. They help with mocking test objects, and they are very useful in TDD for developing without actually using data from your database.
If you don't need any features of the class that aren't found in the interface... then why not always prefer the interface?
It will make your code easier to modify in the future and easier to test (mocking).
You have the right idea already. I would only add a couple of notes to this...
First, abstraction does not mean "interface". For example, a "connection string" is an abstraction even though it's just a string... it's not about the "type" of the thing in question, it's about the intention of use for that thing.
And secondly, if you are doing test automation of any kind, look for the pain and friction that writing the tests exposes. If you find yourself having to set up too many external conditions for a test, it's a sign that you need a better abstraction between the thing you're testing and the things it interacts with.
I think you've said it pretty well. Much of this will be a stylistic thing. There are open source projects I've looked at where everything has an interface and an implementation, and it's kind of frustrating, but it might make iterative development a little easier, since any object's implementation can break while the dummies still work. But honestly, I can dummy any class that doesn't overuse the final keyword, simply by inheriting from it.
I would add this to your list: anything which can be thought of as a black box should be abstracted. This includes some of the things you've mentioned, but it also includes hairy algorithms, which are likely to have multiple useful implementations with different advantages for different situations.
Additionally, interfaces come in handy very often with composite objects. That's the only way something like Java's Swing library gets anything done, but it can also be useful for more mundane objects. (I personally like having an interface like ValidityChecker with ways to and-compose or or-compose subordinate ValidityCheckers.)
Most of the useful things about passing interfaces have already been said. However, I would add:
Implementing an interface on an object, or later on multiple objects, FORCES all the implementers to follow an IDENTICAL pattern to fulfil the contract. This can be useful when you have not-so-OOP-experienced programmers actually writing the implementation code.
In some languages you can add attributes on the interface itself, which can differ in sense and intent from the attributes of the actual implementation.
This caught my attention last night.
On the latest ALT.NET Podcast, Scott Bellware discusses how, as opposed to Ruby, languages like C#, Java, et al. are not truly object-oriented, opting instead for the phrase "class-oriented". They talk about this distinction in very vague terms without going into much detail or discussing the pros and cons much.
What is the real difference here, and how much does it matter? What other languages, then, are "object-oriented"? It sounded pretty interesting, but I don't want to have to learn Ruby just to find out what, if anything, I am missing.
Update
After reading some of the answers below, it seems like people generally agree that the reference is to duck typing. What I'm still not sure I understand, though, is the claim that this ultimately changes all that much, especially if you are already doing proper TDD with loose coupling etc. Can someone show me an example of a specific thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?
In an object-oriented language, objects are defined by defining objects rather than classes, although classes can provide some useful templates for specific, cookie-cutter definitions of a given abstraction. In a class-oriented language like C#, objects must be defined by classes, and these templates are usually canned, packaged and made immutable before runtime. This arbitrary constraint, that objects must be defined before runtime and that their definitions are immutable, is not an object-oriented concept; it's class-oriented.
The duck-typing comments here are attributable more to the fact that Ruby and Python are more dynamic than C#; it doesn't really have anything to do with their OO nature.
What (I think) Bellware meant is that in Ruby, everything is an object. Even a class: a class definition is an instance of an object. As such, you can add, change or remove behavior on it at runtime.
Another good example is that nil is an object as well. In Ruby, everything is LITERALLY an object. Having such deep OO in its entire being allows for some fun metaprogramming techniques such as method_missing.
IMO, it's really over-defining "object-oriented", but what they are referring to is that Ruby, unlike C#, C++, Java, et al., does not make you define a class -- you really only ever work directly with objects. Conversely, in C# for example, you define classes that you then must instantiate into objects by way of the new keyword. The key point is that in C# you must declare a class, or describe it first. Additionally, in Ruby everything -- even numbers, for example -- is an object. In contrast, C# still retains the distinction between object types and value types. This, I think, illustrates the point they make about C# and similar languages: object types and value types imply a type system, meaning you have an entire system for describing types, as opposed to just working with objects.
Conceptually, I think OO design is what provides the abstraction we use to deal with complexity in software systems these days. The language is a tool used to implement an OO design -- some make it more natural than others. I would still argue that, by the more common and broader definition, C# and the others are still object-oriented languages.
There are three pillars of OOP
Encapsulation
Inheritance
Polymorphism
If a language can do those three things, it is an OOP language.
I am pretty sure the argument over whether language X does OOP better than language Y will go on forever.
OO is sometimes defined as message oriented. The idea is that a method call (or property access) is really a message sent to another object. How the receiving object handles the message is completely encapsulated. Often the message corresponds to a method which is then executed, but that is just an implementation detail. You can, for example, create a catch-all handler which is executed regardless of the method name in the message.
Static OO as in C# does not have this kind of encapsulation. A message has to correspond to an existing method or property, otherwise the compiler will complain. Dynamic languages like Smalltalk, Ruby or Python do, however, support "message-based" OO.
So in this sense C# and other statically typed OO languages are not true OO, since they lack "true" encapsulation.
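Objective-C, which features heavily elsewhere in this thread, sits in the dynamic camp here. A minimal sketch of such a catch-all handler (the class name is invented, and the fallback signature only covers argument-less messages):

@interface TraceReceiver : NSObject
@end

@implementation TraceReceiver

// Called when no method matches the incoming selector; we must still
// supply a signature so the runtime can build an NSInvocation.
- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel {
    NSMethodSignature *sig = [super methodSignatureForSelector:sel];
    return sig ?: [NSMethodSignature signatureWithObjCTypes:"v@:"];
}

// The catch-all: executed regardless of the method name in the message.
- (void)forwardInvocation:(NSInvocation *)invocation {
    NSLog(@"Received message: %@", NSStringFromSelector([invocation selector]));
}

@end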
Update: It's the new wave... which suggests that everything we've been doing till now is passé... It seems to be propping up quite a bit in podcasts and books... Maybe this is what you heard.
Till now we've been concerned with static classes and haven't unleashed the power of object-oriented development. We've been doing "class-based dev": classes are fixed/static templates used to create objects, and all objects of a class are created equal.
Just to illustrate what I've been babbling about, let me borrow a Ruby code snippet from a PragProg screencast I just had the privilege of watching.
"Prototype-based development" blurs the line between objects and classes... there is no difference.
animal = Object.new # create a new instance of base Object
def animal.number_of_feet=(feet) # adding new methods to an Object instance. What?
@number_of_feet = feet
end
def animal.number_of_feet
@number_of_feet
end
cat = animal.clone #inherits 'number_of_feet' behavior from animal
cat.number_of_feet = 4
felix = cat.clone #inherits state of '4' and behavior from cat
puts felix.number_of_feet # outputs 4
The idea being that it's a more powerful way to inherit state and behavior than traditional class-based inheritance. It gives you more flexibility and control in certain "special" scenarios (that I've yet to fathom). This allows things like mix-ins (reusing behavior without class inheritance)...
By challenging the basic primitives of how we think about problems, "true OOP" is like "the Matrix" in a way... you keep going WTF in a loop. Like this one, where the base class of Container can be either an Array or a Hash, based on which side of 0.5 the generated random number falls:
class Container < (rand < 0.5 ? Array : Hash)
end
Ruby, JavaScript and the new brigade seem to be the ones pioneering this. I'm still out on this one... reading up and trying to make sense of this new phenomenon. It seems powerful... too powerful... Useful? I need my eyes opened a bit more. Interesting times, these.
I've only listened to the first 6-7 minutes of the podcast that sparked your question. If their intent is to say that C# isn't a purely object-oriented language, that's actually correct. Not everything in C# is an object (at least the primitives aren't, though boxing creates an object containing the same value). In Ruby, everything is an object. Daren and Ben seem to have covered all the bases in their discussion of "duck typing", so I won't repeat it.
Whether or not this difference (everything an object versus not everything an object) is material/significant is a question I can't readily answer, because I don't have sufficient depth in Ruby to compare it to C#. Those of you here who know Smalltalk (I don't, though I wish I did) have probably been watching the Ruby movement with some amusement, since Smalltalk was the first pure OO language 30 years ago.
Maybe they are alluding to the difference between duck typing and class hierarchies?
if it walks like a duck and quacks like a duck, just pretend it's a duck and kick it.
In C#, Java etc. the compiler fusses a lot about: Are you allowed to do this operation on that object?
Object Oriented vs. Class Oriented could therefore mean: Does the language worry about objects or classes?
For instance, in Python, to implement an iterable object you only need to supply a method __iter__() that returns an object with a method named next(). That's all there is to it: no interface implementation (there is no such thing), no subclassing. Just talking like a duck / iterator.
EDIT: This post was upvoted while I rewrote everything. Sorry, I won't ever do that again. The original content included advice to learn as many languages as possible and not to worry about what the language doctors think/say about a language.
That was an abstract podcast indeed!
But I see what they're getting at: they're just dazzled by Ruby sparkle. Ruby allows you to do things that C-based and Java programmers wouldn't even think of, and combinations of those things let you achieve undreamt-of possibilities.
Adding new methods to the built-in String class because you feel like it, passing around unnamed blocks of code for others to execute, mixins... Conventional folks are not used to objects straying this far from the class template.
It's a whole new world out there, for sure...
As for the C# guys not being OO enough... don't take it to heart. It's just the stuff you say when you're flabbergasted for words. Ruby does that to most people.
If I had to recommend one language for people to learn in the current decade, it would be Ruby. I'm glad I did... although some people may claim Python. But that's just, like, my opinion... man! :D
I don't think this is specifically about duck typing. For instance, C# already supports a limited form of duck typing: you can use foreach on any class that provides a GetEnumerator method whose return value has MoveNext and Current, with no interface required.
The concept of duck typing is compatible with statically typed languages like Java and C#; it's basically an extension of reflection.
This is really the case of static vs dynamic typing. Both are proper-OO, in as much as there is such a thing. Outside of academia it's really not worth debating.
Rubbish code can be written in either. Great code can be written in either. There's absolutely nothing functional that one model can do that the other can't.
The real difference is in the nature of the coding done. Static types reduce freedom, but the advantage is that everyone knows what they're dealing with. The opportunity to change instances on the fly is very powerful, but the cost is that it becomes hard to know what you're dealing with.
For instance, for Java or C#, IntelliSense is easy: the IDE can quickly produce a drop-down list of possibilities. For JavaScript or Ruby this becomes a lot harder.
For certain things, for instance producing an API that someone else will code with, there is a real advantage in static typing. For others, for instance rapidly producing prototypes, the advantage goes to dynamic.
It's worth having an understanding of both in your skills toolbox, but nowhere near as important as understanding the one you already use in real depth.
Object orientation is a concept. This concept is based upon certain ideas. The technical names of these ideas (actually principles that evolved over time and were not there from the first hour) have already been given above, so I'm not going to repeat them. Instead I'll explain this as simply and non-technically as I can.
The idea of OO programming is that there are objects. Objects are small independent entities. These entities may have embedded information, or they may not. If they have such information, only the entity itself can access or change it. The entities communicate with each other by sending messages. Compare this to human beings: humans are independent entities, have internal data stored in their brains, and interact with each other by communicating (e.g. talking to each other). If you need knowledge from someone else's brain, you cannot access it directly; you must ask them a question, and they may answer, telling you what you wanted to know.
And that's basically it. This is the real idea behind OO programming: write these entities, define the communication between them, and have them interact to form an application. This concept is not bound to any language. It doesn't matter whether you write your code in C#, Java, or Ruby. With some extra work the concept can even be applied in pure C, even though it is a procedural language, because it offers everything you need for the concept.
Different languages have adopted this concept of OO programming, and of course their interpretations are not always equal. Some languages allow what other languages forbid, for example. Now, one of the concepts involved is the concept of classes. Some languages have classes, some don't. A class is a blueprint for how an object looks. It defines the internal data storage of an object and the messages an object can understand, and if there is inheritance (which is not mandatory for OO programming!), a class also defines which other class (or classes, if multiple inheritance is allowed) it inherits from (and which properties, if selective inheritance exists). Once you have created such a blueprint, you can generate an unlimited number of objects built according to it.
There are OO languages that have no classes, though. How are objects built then? Well, usually dynamically. E.g. you can create a new blank object and then dynamically add internal structure, like instance variables or methods (messages), to it. Or you can duplicate an already existing object, with all its properties, and then modify it. Or possibly merge two objects into a new one. Unlike class-based languages, these languages are very dynamic, as you can generate objects at runtime in ways that not even you, the developer, thought about when you started writing the code.
Usually this dynamism has a price: the more dynamic a language is, the more memory (RAM) objects waste and the slower everything gets, because program flow is extremely dynamic as well, and it's hard for a compiler to generate efficient code if it has no chance to predict code or data flow. JIT compilers can optimize some parts of that at runtime once they know the program flow; however, as these languages are so dynamic, program flow can change at any time, forcing the JIT to throw away all compilation results and re-compile the same code over and over again.
But this is a tiny implementation detail; it has nothing to do with the basic OO principle. It is nowhere said that objects need to be dynamic or must be alterable at runtime. Wikipedia puts it pretty well:
Programming techniques may include features such as information hiding, data abstraction, encapsulation, modularity, polymorphism, and inheritance.
http://en.wikipedia.org/wiki/Object-oriented_programming
They may, or they may not. None of this is mandatory. Mandatory is only the presence of objects, and the objects must have ways to interact with each other (otherwise objects would be pretty useless if they could not interact).
You asked: "Can someone show me an example of a wondrous thing I could do with Ruby that I cannot do with C# and that exemplifies this different OOP approach?"
One good example is Active Record, the ORM built into Rails. The model classes are built dynamically at runtime, based on the database schema.
This probably really comes down to what these people see others doing in C# and Java, as opposed to what C# and Java actually support. Most languages can be used in different programming paradigms. For example, you can write procedural code in C# and Scheme, and you can do functional-style programming in Java. It is more about what you are trying to do and what the language supports.
I'll take a stab at this.
Python and Ruby are duck-typed. To produce maintainable code in these languages, you pretty much have to use test-driven development. As such, it is very important for a developer to be able to easily inject dependencies into their code without having to create a giant supporting framework.
Successful dependency injection depends upon having a pretty good object model. The two are sort of two sides of the same coin. If you really understand how to use OOP, then you should by default create designs where dependencies can be easily injected.
Because dependency injection is easier in dynamically typed languages, Ruby/Python developers feel that their language understands the lessons of OO much better than its statically typed counterparts.