Dependency injection framework for Cocoa? [closed] - objective-c

Closed 10 years ago.
Interface Builder can be used for basic dependency injection in a Cocoa app, but is anyone aware of more complete dependency injection frameworks for Objective-C/Cocoa for when you don't want to instantiate objects in a NIB file?
Edit
To clarify, I recognize that IB can be used for basic DI, but I'm looking for a framework with more complete functionality, including separate production and testing configurations, along the lines of Groovy or Spring.

Objection by Atomic Object. It is molded in the image of Guice.

I'll go out on a limb and speak on this. Dependency injection as described by the top answer doesn't address the core issue that those seeking to use it are having. We'd like a means of development where component A does not directly instantiate or reference component B. Component A is bound to component B only through a protocol, and component B is never referenced directly by component A. This allows component B to be replaced at any time without ever touching component A. I downvoted, but I will research your references, as it seems there are a few who agree with you. I'm not trying to debate, just looking to learn. I'd like to understand more about the "nope, you don't need to do that" approach.

I think you'll find that you don't need it in late-binding languages like Objective-C, Ruby, Lisp and so on. Compare Jamis Buck's revelation that he was going down an overly complex path when he tried to build Needle, a DI framework for Ruby, in his post Net::SSH revisited.
Here are some links that will hopefully give you some sample code for doing similar things in Objective-C. With categories you can essentially change any class's behavior at runtime. See Mac Developer Tips – Objective-C: Categories and the Cocoa API docs on categories. Essentially, you don't need some configurable central place to ask for "the thing that does X", because you can just instantiate TheThingThatDoesX directly, and if something else needs to change or hook into that behavior, it can use categories.
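For illustration, here is a minimal, hypothetical sketch of that idea: the class and method names are made up, but the category mechanism is standard Objective-C.

#import <Foundation/Foundation.h>

// A concrete collaborator, instantiated directly -- no container involved.
@interface TheThingThatDoesX : NSObject
- (void)doX;
@end

@implementation TheThingThatDoesX
- (void)doX { NSLog(@"doing X"); }
@end

// A category lets other code hook into or extend that behavior,
// without editing the original class or routing calls through a registry.
@interface TheThingThatDoesX (Auditing)
- (void)doXAndAudit;
@end

@implementation TheThingThatDoesX (Auditing)
- (void)doXAndAudit {
    NSLog(@"about to do X");
    [self doX];
    NSLog(@"did X");
}
@end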

Typhoon
Almost one year ago, I released: https://github.com/typhoon-framework/Typhoon
The Typhoon website lists the key features. A quick summary (a brief usage sketch follows the list):
Non-invasive. No macros or XML required. Uses a powerful Objective-C runtime approach.
Makes it easy to have multiple configurations of the same base-class or protocol.
No magic strings - supports IDE refactoring, code-completion and compile-time checking.
Supports injection of view controllers and storyboard integration.
Supports both initializer and property injection, plus life-cycle management.
Powerful memory management features. Provides pre-configured objects, without the memory overhead of singletons.
Excellent support for circular dependencies.
Lean. It has a very low footprint, so is appropriate for CPU and memory constrained devices.
Battle-tested - used in all kinds of App Store-featured apps.
An internationally distributed core team (we even monitor StackOverflow), so support for any of your questions is never far away :)
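To give a flavor of the non-invasive, no-magic-strings style, here is a hedged sketch of an assembly, adapted from the pattern in the Typhoon sample app; Knight, Quest and CampaignQuest are illustrative names, and the exact API should be checked against the current docs:

@interface MiddleAgesAssembly : TyphoonAssembly
- (Knight *)basicKnight;
- (id<Quest>)defaultQuest;
@end

@implementation MiddleAgesAssembly

// Each method describes how to build one component; collaborators are
// expressed as plain method calls, so refactoring and autocomplete work.
- (Knight *)basicKnight
{
    return [TyphoonDefinition withClass:[Knight class]
                        configuration:^(TyphoonDefinition *definition) {
        [definition injectProperty:@selector(quest) with:[self defaultQuest]];
    }];
}

- (id<Quest>)defaultQuest
{
    return [TyphoonDefinition withClass:[CampaignQuest class]];
}

@end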
API Docs and sample app
API docs: http://www.typhoonframework.org/docs/latest/api/
We have a nice sample app: https://github.com/jasperblues/Typhoon-example
Quality Control:
We also maintain a robust quality control system.
Every commit triggers a series of regression tests
We maintain high test coverage.

You don't have to instantiate the object in the NIB file. If you set the File's Owner to your object's class and then link things in the view/window/whatever up to that, you can set your object as the owner at runtime by loading the nib file manually. That way you can have a dynamic instance of an object that still gets dependencies injected properly.
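For instance, a minimal sketch of that manual loading on OS X 10.8 or later (MyWidgetController and the nib name are hypothetical):

// Create the owner yourself, then load the nib with it as File's Owner.
// Outlets wired to File's Owner in IB are set during loading -- in effect,
// the nib's objects are injected into your existing instance.
MyWidgetController *owner = [[MyWidgetController alloc] init];
NSArray *topLevelObjects = nil;
[[NSBundle mainBundle] loadNibNamed:@"MyWidget"
                              owner:owner
                    topLevelObjects:&topLevelObjects];
// owner's IBOutlets are now populated.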

What about the dependency injection implementation at Objective-IOC?

What about ObjectivePim?

I’ve written a very simple DI container; the code is on GitHub. It can only do the bare basics, i.e. discover the dependencies of an object and satisfy them using other given objects. I have found that to be usable in real-world applications; the code is very simple and it’s fun to hack with.
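The container itself is on GitHub; purely to illustrate the idea (this is not the author's code), dependency discovery plus satisfaction can be sketched with runtime property introspection and key-value coding:

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// For each declared object-typed property, inject the first candidate
// whose class matches the property's declared type.
static void SatisfyDependencies(id target, NSArray *candidates)
{
    unsigned int count = 0;
    objc_property_t *props = class_copyPropertyList([target class], &count);
    for (unsigned int i = 0; i < count; i++) {
        // Attribute strings look like: T@"SomeClass",&,N,V_ivar
        NSString *attrs = @(property_getAttributes(props[i]));
        NSRange r = [attrs rangeOfString:@"T@\""];
        if (r.location == NSNotFound) continue;   // not an object type
        NSString *rest = [attrs substringFromIndex:NSMaxRange(r)];
        NSRange quote = [rest rangeOfString:@"\""];
        if (quote.location == NSNotFound) continue;
        Class required = NSClassFromString([rest substringToIndex:quote.location]);
        for (id candidate in candidates) {
            if (required && [candidate isKindOfClass:required]) {
                [target setValue:candidate
                          forKey:@(property_getName(props[i]))];   // inject it
                break;
            }
        }
    }
    free(props);
}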

Has anyone looked at the Associative References feature of Mac OS X 10.6?
I believe it would be possible to build something similar to DI with it, or that it already comes close.
As far as I have seen, however, any reference that an object needs has to be fetched manually using objc_getAssociatedObject().
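For reference, the mechanism looks like this (consumer and service are illustrative variables):

#import <objc/runtime.h>

// The key only needs a stable address; a static char is the usual idiom.
static char kServiceKey;

// Attach a collaborator to any object at runtime...
objc_setAssociatedObject(consumer, &kServiceKey, service,
                         OBJC_ASSOCIATION_RETAIN_NONATOMIC);

// ...and, as noted above, fetch it back manually wherever it is needed.
id injected = objc_getAssociatedObject(consumer, &kServiceKey);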
Manfred

Interface Builder does not do ANY dependency injection. It does not need to. Interface Builder serializes objects. When a nib is "awoken" (i.e. opened), there are no "dependencies" to resolve -- there are just properties to set. Very, very simple. Opening a nib relies solely on the NSCoding protocol and key-value coding.
Dependency injection, pretty much a make-work project at the best of times, or at best a generalized glue layer between components designed independently, is of no use in well-written Objective-C code. You are asking for a tool that you don't need.
In Objective-C, software that requires an anonymous service declares a protocol. Services then adopt this protocol, and clients load services as dynamic plug-ins. If, on the other hand, the service was written prior to the client, it is simply a matter of writing a new plug-in that adapts the existing interface to the protocol. This is less work, and more straightforward, than trying to define an intermediate data-driven system for "discovering" (please) an interface at runtime.
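A minimal sketch of that pattern (the protocol, bundle path and method names are all illustrative):

// The client declares only what it needs; no implementation is referenced.
@protocol ReceiptService <NSObject>
- (void)sendReceiptForTransaction:(NSString *)transactionId;
@end

// A service is loaded as a dynamic plug-in and checked against the protocol.
NSString *pluginPath = @"/Library/MyApp/PlugIns/EmailReceipts.bundle"; // hypothetical
NSBundle *plugin = [NSBundle bundleWithPath:pluginPath];
Class serviceClass = [plugin principalClass];
id service = [[serviceClass alloc] init];
NSAssert([service conformsToProtocol:@protocol(ReceiptService)],
         @"plug-in must adopt ReceiptService");
[(id<ReceiptService>)service sendReceiptForTransaction:@"TX-42"];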
Is it not obvious to everyone that the big secret of DI is just that it's a way to write code in XML instead of in the native language? I'd really like to hear a good argument as to how XML is somehow a better programming language than a real programming language. It doesn't make any sense.

I work with Spring all day and I've checked out Groovy. I'm by no means an Xcode/Cocoa expert, but IB does only some dependency injection, which Groovy doesn't even really claim to be doing.
I reckon you are not looking for DI, but rather for a well-assembled set of integrated libraries that saves you from typing a lot of code other people have already typed. I think there are no Spring-like frameworks for Cocoa because for some reason people tend to read "open source" as "not platform-dependent", and therefore Cocoa is a bit left out in the cold.
Depending on your needs though, there are some nice free open source libraries available for Cocoa, all listed on CocoaDev in a nice list.
I know it isn't Spring, but I hope it helps.

DI is a property of a runtime execution environment requiring dynamic binding. I'm very new to Obj-C and Cocoa, so I may speak out of turn. Unless I'm missing something, I don't see how one could implement DI except by interpreting Objective-C rather than compiling it, or by modifying the runtime environment.
I suspect that the DI like behaviour of IB is because there is a domain specific runtime environment associated with apps that are built with it.
I'm happy to be corrected though.
Categories appear to be an implementation of mixins, allowing dynamic dispatch of methods to a delegate. Rather cool, and similar to Java's interface concept, though the details differ; from the following link I can't tell whether constants can be defined in a category, though member fields cannot.
objective-c categories

Related

Where do I find conceptual documentation for Windows Runtime? [closed]

Closed 10 years ago.
I'm trying to learn enough about Windows Runtime to make a recommendation about what it would entail for my employer to port our existing applications to it. I'm having trouble finding documentation that provides a technical overview of how the API works.
All my web searches seem to lead me to API reference on MSDN, which is terse to the point of unreadability. It documents the formal signatures of API classes and methods, but seems to assume that the reader already knows how things fit together. The purpose of each method is usually just described as a terse sentence fragment that restates its name with spaces instead of CamelCase, and further explanations about restrictions, expectations and invariants beyond what is evident in the type declarations are almost completely absent. (This contrasts with the fairly informative "Remarks" sections in the reference documentation of the ordinary Win32 API).
Clearly, I'm not supposed to be using this documentation to develop an initial overview of how the API works. What am I supposed to be using?
Moving one level up in MSDN there is a section with the promising name Concepts and architecture, and some even more promising-sounding Programming concepts and Fundamentals -- but what they actually describe is a seemingly random selection of fairly specialized topics, certainly not what I need to make sense of the API reference.
Is there official documentation in book form that I need to buy and read? Something outside of MSDN? A secret MSDN link that I haven't been able to find?
I've seen this previous question, which didn't get any real answers, perhaps because it was phrased rather opaquely with $5 words like "ontology". In an attempt to explain better what I'm looking for, here are some examples of questions I hope the documentation I seek would answer:
(Note that these are examples only. My primary goal is to find a specification that answers these and similar questions, rather than get answers to these specific examples.)
Windows.Networking.Sockets.StreamSocket has an InputStream property of type Windows.Storage.Streams.IInputStream, which I'm clearly supposed to use to read from the socket. But the only method of IInputStream is ReadAsync which reads into an IBuffer, and IBuffer is an interface that declares nothing but capacity and size properties. How do I get at the actual bytes being read? If I implement IBuffer myself, how will the system deliver them to me?
After hours of frustrated clicking and googling, I have tentatively concluded that the interface is a lie -- IBuffer is not something anyone can implement, but ReadAsync wants specifically a Windows.Storage.Streams.Buffer (without the I), no matter what its type declaration says. Then it seems I can use DataReader to read the actual bytes from the Buffer. Is that really how it's supposed to go?
or
Hmm, it looks like DataReader has a constructor that takes an IInputStream, so perhaps I can cut out the Buffer middleman after all. However, this seems to be wrong, because DataReader's methods such as ReadBytes are synchronous and supposedly all I/O in WinRT is asynchronous; certainly the one declared method of IInputStream is. So how does that work?
After more frustrated googling and clicking: Oh, there's a LoadAsync method in DataReader that does ... something. According to MSDN, it "loads data from the input stream", but what are the conventions for using it? Am I supposed to call it just once immediately after constructing the DataReader, or can I call it multiple times to reuse the same DataReader for the next read operation? Does DataReader contain a circular buffer internally? What happens if I try to read more bytes than have been read asynchronously already? The super-terse documentation of the ReadFoo methods mention no exceptions or error conditions; neither do the class documentation for DataReader or IDataReader.
or
Apparently apps can be multi-threaded, since the supported Win32 APIs include things like InterlockedCompareExchange, EnterCriticalSection and so forth. But neither CreateThread nor the RTL's _beginthreadex seem to be supported, and there doesn't appear to be any Java-ish Thread class anywhere in the WinRT class hierarchy. How does one start a new thread?
or
Speaking about asynchronous I/O ... I'm quite comfortable with the general idea of asynchronous I/O and completion continuations, but what are the precise rules in WinRT for, say, which thread the completion routine is called in? If it's always the same thread I started the I/O operation from (which I hope!), do I need to make sure it enters some kind of alertable wait from time to time, so the system has a chance to call my code there?
or
Wikipedia claims that "WinRT is essentially a COM-based API, although relying on an enhanced COM." What exactly is this "enhancement"? If I follow COM rules and conventions, do I risk being bitten by things that work differently due to "enhancements"? Or, conversely, are there things I can do more easily because of the enhancement?
or
The only description of how asynchronous callbacks work make it look like they are quite specific to the implementation language -- it looks fairly different between C#/CLR, JavaScript and C++/CX. What's actually happening at the COM/ABI level here? In particular, since the API documentation appears to assume that "C++" means "C++/CX", how does asynchronous I/O work if I use WRL instead? Or is it just the case that the await and then business is just language-provided sugar and the real ABI is always in terms of AsyncOperationWithProgressCompletedHandler and so forth, as described in the API reference? But that's a delegate type; does that even have a well-defined meaning in terms of COM?
I've just noticed that there seem to be two parallel page hierarchies on MSDN describing the WinRT API:
Windows Store app development | API Reference | Windows API reference for Windows Store apps
This is the almost vacuous documentation I rant about in the question. However, some of the API elements are also described in
MSDN Library | Additional Resources | Windows Runtime C++ reference
which is slightly closer to the COM metal, and occasionally contains useful Remarks sections. For example, its page for IBuffer reveals that implementations of IBuffer must also implement IBufferByteAccess, which provides access to the actual bytes.
It is not ideal (and still seems to leave a lot of information implicit), but it is at least something.
I think this post could be a debate on https://chat.stackoverflow.com/, but not a question.
The WinRT API and projection (C#/XAML, HTML/JS, etc.) references are in their first release, and from my point of view they are at this time just a basic reference, not an extensive documentation source.
This usually happens with all recently created technologies; I think you just need to wait a couple of months and the documentation will start to improve gradually.

How to understand the big picture in a loose coupled application?

We have been developing code using loose coupling and dependency injection.
A lot of "service" style classes have a constructor and one method that implements an interface. Each individual class is very easy to understand in isolation.
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
It's not easy to jump to collaborators using Eclipse because you have to go via the interfaces. If the interface is Runnable, that is no help in finding which class is actually plugged in. Really it's necessary to go back to the DI container definition and try to figure things out from there.
Here's a line of code from a dependency-injected service class:
// myExpiryCutoffDateService was injected
Date cutoff = myExpiryCutoffDateService.get();
Coupling here is as loose as can be. The expiry date could be implemented in literally any manner.
Here's what it might look like in a more coupled application.
ExpiryDateService service = new ExpiryDateService();
Date cutoff = service.getCutoffDate( databaseConnection, paymentInstrument );
From the tightly coupled version, I can infer that the cutoff date is somehow determined from the payment instrument using a database connection.
I'm finding code of the first style harder to understand than code of the second style.
You might argue that when reading this class, I don't need to know how the cutoff date is figured out. That's true, but if I'm narrowing in on a bug or working out where an enhancement needs to slot in, that is useful information to know.
Is anyone else experiencing this problem? What solutions have you? Is this just something to adjust to? Are there any tools to allow visualisation of the way classes are wired together? Should I make the classes bigger or more coupled?
(Have deliberately left this question container-agnostic as I'm interested in answers for any).
While I don't know how to answer this question in a single paragraph, I attempted to answer it in a blog post instead: http://blog.ploeh.dk/2012/02/02/LooseCouplingAndTheBigPicture.aspx
To summarize, I find that the most important points are:
Understanding a loosely coupled code base requires a different mindset. While it's harder to 'jump to collaborators' it should also be more or less irrelevant.
Loose coupling is all about understanding a part without understanding the whole. You should rarely need to understand it all at the same time.
When zeroing in on a bug, you should rely on stack traces rather than the static structure of the code in order to learn about collaborators.
It's the responsibility of the developers writing the code to make sure that it's easy to understand - it's not the responsibility of the developer reading the code.
Some tools are aware of DI frameworks and know how to resolve dependencies, allowing you to navigate your code in a natural way. But when that isn't available, you just have to use whatever features your IDE provides as best you can.
I use Visual Studio and a custom-made framework, so the problem you describe is my life. In Visual Studio, SHIFT+F12 is my friend. It shows all references to the symbol under the cursor. After a while you get used to the necessarily non-linear navigation through your code, and it becomes second-nature to think in terms of "which class implements this interface" and "where is the injection/configuration site so I can see which class is being used to satisfy this interface dependency".
There are also extensions available for VS which provide UI enhancements to help with this, such as Productivity Power Tools. For instance, you can hover over an interface, an info box will pop up, and you can click "Implemented By" to see all the classes in your solution implementing that interface. You can double-click to jump to the definition of any of those classes. (I still usually just use SHIFT+F12 anyway.)
I just had an internal discussion about this, and ended up writing this piece, which I think is too good not to share. I'm copying it here (almost) unedited, but even though it's part of a bigger internal discussion, I think most of it can stand alone.
The discussion is about introduction of a custom interface called IPurchaseReceiptService, and whether or not it should be replaced with use of IObserver<T>.
Well, I can't say that I have strong data points about any of this - it's just some theories that I'm pursuing... However, my theory about cognitive overhead at the moment goes something like this: consider your special IPurchaseReceiptService:
public interface IPurchaseReceiptService
{
    void SendReceipt(string transactionId, string userGuid);
}
If we keep it as the Header Interface it currently is, it only has that single SendReceipt method. That's cool.
What's not so cool is that you had to come up with a name for the interface, and another name for the method. There's a bit of overlap between the two: the word Receipt appears twice. IME, sometimes that overlap can be even more pronounced.
Furthermore, the name of the interface is IPurchaseReceiptService, which isn't particularly helpful either. The Service suffix is essentially the new Manager, and is, IMO, a design smell.
Additionally, not only did you have to name the interface and the method, but you also have to name the variable when you use it:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IPurchaseReceiptService purchaseReceiptService,
    EvoCipher cipher
)
At this point, you've essentially said the same thing thrice. This is, according to my theory, cognitive overhead, and a smell that the design could and should be simpler.
Now, contrast this to use of a well-known interface like IObserver<T>:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IObserver<TransactionInfo> purchaseReceiptService,
    EvoCipher cipher
)
This enables you to get rid of the bureaucracy and reduce the design to the heart of the matter. You still have intention-revealing naming - you only shift the design from a Type Name Role Hint to an Argument Name Role Hint.
When it comes to the discussion about 'disconnectedness', I'm under no illusion that use of IObserver<T> will magically make this problem go away, but I have another theory about this.
My theory is that the reason many programmers find programming to interfaces so difficult is exactly because they are used to Visual Studio's Go to definition feature (incidentally, this is yet another example of how tooling rots the mind). These programmers are perpetually in a state of mind where they need to know what's 'on the other side of an interface'. Why is this? Could it be because the abstraction is poor?
This ties back to the RAP (Reused Abstractions Principle), because if you confirm programmers' belief that there's a single, particular implementation behind every interface, it's no wonder they think that interfaces are only in the way.
However, if you apply the RAP, I hope that slowly, programmers will learn that behind a particular interface, there may be any implementation of that interface, and their client code must be able to handle any implementation of that interface without changing the correctness of the system. If this theory holds, we've just introduced the Liskov Substitution Principle into a code base without scaring anyone with high-brow concepts they don't understand :)
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
This is not accurate. For each class, you know exactly what kinds of objects it depends on in order to provide its functionality at runtime, since you know which objects are expected to be injected.
What you don't know is the concrete class that will be injected at runtime to implement the interface or base class your class depends on.
So if you want to see which concrete class is actually injected, you just have to look at the configuration for that class.
You could also use facilities provided by your IDE.
Since you refer to Eclipse: Spring has a plugin for it, which also has a visual tab displaying the beans you configure. Did you check that? Isn't it what you are looking for?
Also check out the same discussion in Spring Forum
UPDATE:
Reading your question again, I don't think this is a real question.
I mean that in the following sense.
Like all things, loose coupling is not a panacea; it has its own disadvantages.
Most tend to focus on the benefits, but like any solution it has its drawbacks.
What you describe in your question is one of its main disadvantages: it is indeed not easy to see the big picture, since everything is configurable and pluggable.
There are other drawbacks one could complain about as well, e.g. that it is slower than tightly coupled applications, and that would also be true.
In any case, to reiterate: what you describe is not a problem you stumbled upon and for which you can find a standard solution (or any, for that matter).
It is one of the drawbacks of loose coupling, and you have to decide whether this cost is higher than what you actually gain from it, as in any design trade-off.
It is like asking:
Hey, I am using this pattern named Singleton. It works great, but I can't create new objects! How can I get around this problem, guys????
Well, you can't; but if you need to, perhaps Singleton is not for you...
One thing that helped me is placing multiple closely related classes in the same file. I know this goes against the general advice (of having one class per file), and I generally agree with that advice, but in my application architecture it works very well. Below I will try to explain in which cases this applies.
The architecture of my business layer is designed around the concept of business commands. Command classes (simple DTO with only data and no behavior) are defined and for each command there is a 'command handler' that contains the business logic to execute this command. Each command handler implements the generic ICommandHandler<TCommand> interface, where TCommand is the actual business command.
Consumers take a dependency on ICommandHandler<TCommand>, create new command instances, and use the injected handler to execute those commands. It looks like this:
public class Consumer
{
    private ICommandHandler<CustomerMovedCommand> handler;

    public Consumer(ICommandHandler<CustomerMovedCommand> h)
    {
        this.handler = h;
    }

    public void MoveCustomer(int customerId, Address address)
    {
        var command = new CustomerMovedCommand();
        command.CustomerId = customerId;
        command.NewAddress = address;
        this.handler.Handle(command);
    }
}
Now consumers only depend on a specific ICommandHandler<TCommand> and have no notion of the actual implementation (as it should be). However, although the Consumer should know nothing about the implementation, during development I (as a developer) am very much interested in the actual business logic that is executed, simply because development is done in vertical slices; meaning that I'm often working on both the UI and business logic of a simple feature. This means I'm often switching between business logic and UI logic.
So what I did was put the command (in this example the CustomerMovedCommand and the implementation of ICommandHandler<CustomerMovedCommand>) in the same file, with the command first. Because the command itself is concrete (since it's a DTO there is no reason to abstract it), jumping to the class is easy (F12 in Visual Studio). By placing the handler next to the command, jumping to the command also means jumping to the business logic.
Of course this only works when it is okay for the command and handler to be living in the same assembly. When your commands need to be deployed separately (for instance when reusing them in a client/server scenario), this will not work.
Of course this is just 45% of my business layer. Another big piece (say another 45%) is the queries, and they are designed similarly, using a query class and a query handler. These two classes are also placed in the same file, which -again- allows me to navigate quickly to the business logic.
Because the commands and queries are about 90% of my business layer, I can in most cases move very quickly from presentation layer to business layer and even navigate easily within the business layer.
I must say these are the only two cases where I place multiple classes in the same file, but it makes navigation a lot easier.
If you want to learn more about how I designed this, I've written two articles about this:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
In my opinion, loosely coupled code can help you a lot, but I agree with you about its readability.
The real problem is that method names should also convey valuable information.
That is the Intention-Revealing Interface principle, as stated by Domain-Driven Design ( http://domaindrivendesign.org/node/113 ).
You could rename the get method:
// intention revealing name
Date cutoff = myExpiryCutoffDateService.calculateFromPayment();
I suggest you read thoroughly about DDD principles; your code could become much more readable and thus more manageable.
I have found The Brain to be useful in development as a node-mapping tool. If you write some scripts to parse your source into XML that The Brain accepts, you can browse your system easily.
The secret sauce is to put GUIDs in your code comments on each element you want to track; the nodes in The Brain can then be clicked to take you to that GUID in your IDE.
Depending on how many developers are working on the project, and whether you want to reuse parts of it in different projects, loose coupling can help you a lot. If your team is big and the project spans several years, loose coupling helps, as work can be assigned to different groups of developers more easily. I use Spring/Java with lots of DI, and Eclipse offers some graphs to display dependencies. Using F3 to open the class under the cursor helps a lot. As stated in previous posts, knowing the shortcuts for your tools will help you.
One other thing to consider is creating custom classes or wrappers as they are more easily tracked than common classes that you already have (like Date).
If you use several modules or application layers, it can be a challenge to understand exactly what the project flow is, so you might need to create or use a custom tool to see how everything relates. I have created such a tool for myself, and it helped me understand the project structure more easily.
Documentation!
Yes, you have named the major drawback of loosely coupled code. And even though you have probably already realized that in the end it will pay off, it's true that it will always take longer to find "where" to make your modifications, and you might have to open a few files before finding "the right spot"...
But that's where something really important comes in: documentation. It's weird that no answer explicitly mentioned it; it's a MAJOR requirement in all big-sized development.
API Documentation
An API doc with a good search feature, in which each file and --almost-- each method has a clear description.
"Big picture" documentation
I think it's good to have a wiki that explains the big picture. Bob has made a proxy system? How does it work? Does it handle authentication? What kind of component will use it? Not a whole tutorial, just a place where you can read for 5 minutes and figure out what components are involved and how they are linked together.
I do agree with all the points of Mark Seemann's answer, but when you get into a project for the first time(s), even if you understand the principles behind decoupling well, you'll either need a lot of guessing or some sort of help to figure out where to implement a specific feature you want to develop.
... Again: an API doc and a little developer wiki.
I am astounded that nobody has written about the testability (in terms of unit testing, of course) of loosely coupled code and the non-testability (in the same terms) of tightly coupled designs! It is a no-brainer which design you should choose. Today, with all the mock and coverage frameworks, it is obvious - well, at least for me.
Unless you do not unit test your code, or you think you do but in fact you don't...
Testing in isolation can barely be achieved with tight coupling.
You think you have to navigate through all the dependencies from your IDE? Forget about it! It is the same situation as with compilation and runtime. Hardly any bug can be found during compilation; you cannot be sure whether something works unless you test it, which means executing it. Want to know what is behind the interface? Put a breakpoint and run the goddamn application.
Amen.
...updated after the comment...
Not sure if it is going to serve you, but in Eclipse there is something called the hierarchy view. It shows you all the implementations of an interface within your project (not sure about the whole workspace). You can just navigate to the interface and press F4; it will then show you all the concrete and abstract classes implementing the interface.

Good examples of OCP in open source libraries [closed]

Closed 3 years ago.
There has been a lot of discussion on the subject of the "Open Closed Principle" on stackoverflow. It seems, however, that generally a more relaxed interpretation of the principle is prevalent, so that, for example, Eclipse is considered open for modification through plug-ins.
According to strict OCP, you should modify the original code only to fix bugs, not to add new behaviour.
Are there any good examples of a strict interpretation of OCP in public or OS libraries, where you can observe the evolution of a feature through OCP: there is a class Foo with a method bar(), and then there is a FooDoingAlsoX with a foo2() method in the next version of the library, where the original class was extended but the original code was not modified?
EDIT: According to Robert C. Martin: "The binary executable version of the module, whether a linkable library, a DLL, or a Java .jar, remains untouched"*. I never see libraries kept closed; in practice, new behaviour is added to a library and a new version is published. According to OCP, new behaviour belongs in a new binary module.
*Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
The OCP principle says that a class shall be open for extension but closed for modification. The key to achieving this is abstraction. If you also read the DIP principle, you'll find that abstractions should not depend upon details; details should depend upon abstractions. In your example you have details in your interface (two specific methods, bar() and foo2()). To fully implement OCP you should try to avoid such details (for example, by moving them behind the abstraction and instead having one general foo method with different implementations).
For example take a look at this interface in SolrNet:
https://github.com/mausch/SolrNet/blob/master/SolrNet/ISolrCommand.cs
This is a general command interface that only says that a command can be executed; it doesn't give more details than that.
The details instead lies in the implementations of the interface:
https://github.com/mausch/SolrNet/tree/master/SolrNet/Commands
As you see, you can add as many commands as you wish without changing the implementation of any other class. The specific implementations can thereby be considered closed for modification, but the interface allows us to extend the functionality with new commands, and is hereby open for extension.
(SolrNet isn't extraordinary in any way; I just used examples from this project because I happened to have it in my browser when I read this post. Almost all well-written OO projects make use of the OCP principle in one way or another.)
EDIT: If you want examples of this at the binary level, take a look at nopCommerce (http://nopcommerce.codeplex.com/releases/view/69081), where you can, for example, add your own shipping providers, payment providers or exchange-rate providers without even touching the original DLL, by implementing a set of interfaces. And again, there is nothing extraordinary about nopCommerce; it was just the first project that came to mind because I used it a couple of days ago ;)
OCP is not a principle that shall only be used at the binary level, though. Good OOD uses OCP - not everywhere, but at all levels where it is suitable ;) "Strict" OCP at the binary level is not always suitable, and would add an extra level of complexity if you used it in every single situation. It is mostly interesting when you want to change implementations at runtime, or when you want to let external developers extend your interfaces. You shall always keep the OCP principle in mind when you design your interfaces, but you shall not see it as a law; it is a principle to be used in the correct situations.
I guess you refer to Agile Principles, Patterns and Practices when you quote Robert C. Martin. If so, also read the conclusion in the same chapter, where he says about the same thing as I did above. If you, for example, read his book Clean Code, he gives a more nuanced explanation of the OCP principle, and I would say the quote above is a bit unfortunate, since it can make people think that you shall always put new code in new DLLs, JARs or libs, when the truth is that you shall always consider the context.
I think you should rather take a look at Martin's more up-to-date whitepaper about OCP, http://objectmentor.com/resources/articles/ocp.pdf (which he also refers to in his later book Clean Code); there he never refers to separate binaries, but rather to "classes, modules, functions". I think this proves that Martin does not mean just binary extension when he speaks about OCP, but also extension of classes and functions, so binary extension is no more "strict" than the class extension in my first example.
I am not aware of really good examples, but I think there might be a reason for the more "relaxed interpretation" (for example here on SO):
To fully realize the OCP principle in a real-world project, you need to do the coupling via lean interfaces (see ISP and DIP for this) and dependency injection (either property- or constructor-based)... otherwise you very quickly either get stuck or need to resort to the "relaxed interpretation"...
some interesting links in this regard:
http://www.oodesign.com/open-close-principle.html
http://javaboutique.internet.com/tutorials/JavaOO/index2.html
http://joelabrahamsson.com/entry/the-open-closed-principle-a-real-world-example
http://joelabrahamsson.com/entry/simple-example-of-the-open-closed-principle
Background
On page 100 of PPP, Robert Martin says
"Closed for modification"
Extending the behavior of a module does not result in changes to the source or binary code of the module. The binary executable version of the module, whether a linkable library, a DLL, or a Java .jar, remains untouched.
Also on page 103 he discusses an example, written in C, where a non-OCP design results in recompiling the existing classes:
So, not only must we change the source code of all switch/case statements or if/else chains, but we also must alter the binary files (via recompilation) of all the modules that use any of the Shape data structures. Changing the binary files means that any DLLs, shared libraries, or other kinds of binary components must be redeployed.
It's good to remember that this book was published in 2003 and many of the examples use C++, which is a language notorious for long compile times (unless header file dependencies are handled well - developers from Remedy mentioned in one presentation that Alan Wake's full build takes only about 2 minutes).
So when discussing binary compatibility in the small scale (i.e. within one project), one benefit of OCP (and DIP) is faster compile times, which is less of an issue with modern languages and machines. But in the large scale, when a library is used by many other projects, especially if their code is not in our control, the benefits of not having to release new versions of the software still apply.
Example
As an example of an open-source library which follows OCP in binary compatibility, look at JUnit. There are tens of testing frameworks which rely on JUnit's @RunWith annotation and Runner interface, so that they can be run with the JUnit test runner - without having to change JUnit, Maven, IDEs etc.
Also, JUnit's more recently added @Rule annotation allows test writers to plug custom behavior into standard JUnit tests, which before would have required a custom test runner. Once more, an example of library-level OCP.
By contrast, TestNG does not follow OCP, but contains JUnit-specific checks to execute TestNG and JUnit tests differently. A representative line can be found in the TestRunner.run() method:
if (test.isJUnit()) {
    privateRunJUnit(test);
}
else {
    privateRun(test);
}
As a result, even though the TestNG test runner has in some respects more features (for example, it supports running tests in parallel), other testing frameworks do not use it, because it's not extensible to support other testing frameworks without modifying TestNG. (TestNG has a way to plug in custom test runners using the -testrunfactory argument, but AFAIK it allows only one type of runner per suite. So it would not be possible to use many different testing frameworks in one project, unlike with JUnit.)
Conclusion
However, in most situations OCP is used within an application or library, in which case both the base module and its extensions are packaged inside the same binary. In that situation OCP is used to improve the maintainability of the source code, not to avoid redeploys and new releases. The possible benefit of not having to recompile an unchanged file is still there, but since compile times are so low with most modern languages, that's not very important.
The thing to always keep in mind is that following OCP is expensive, as it makes the system more complex. Robert Martin talks about this on PPP page 105 and the conclusion of the chapter. OCP should be applied carefully, for only the most probable changes. You should not preemptively put in the hooks to follow OCP, but you should put in the hooks only after a change happens that needs them. Thus it is unlikely to find a project where all new features would have been added without changing existing classes - unless somebody does it as an academic exercise (my intuition says that it would be very hard and the resulting code would not be clean).

How do you write good highly useful general purpose libraries?

I asked this question about Microsoft .NET Libraries and the complexity of its source code. From what I'm reading, writing general purpose libraries and writing applications can be two different things. When writing libraries, you have to think about the client who could literally be everyone (supposing I release the library for use in the general public).
What kind of practices or theories or techniques are useful when learning to write libraries? Where do you learn to write code like the one in the .NET library? This looks like a "black art" which I don't know too much about.
That's a pretty subjective question, but here's one objective answer: the Framework Design Guidelines book (be sure to get the 2nd edition) is a very good book about how to write effective class libraries. The content is very good, and the often-dissenting annotations are thought-provoking. Every shop should have a copy of this book available.
You definitely need to watch Josh Bloch in his presentation How to Design a Good API & Why it Matters (1h 9m long). He is a Java guru but library design and object orientation are universal.
One piece of advice often ignored by library authors is to internalize costs. If something is hard to do, the library should do it. Too often I've seen the authors of a library push something hard onto the consumers of the API rather than solving it themselves. Instead, look for the hardest things and make sure the library does them or at least makes them very easy.
I will be paraphrasing from Effective C++ by Scott Meyers, which I have found to be the best advice I got:
Adhere to the principle of least astonishment: strive to provide classes whose operators and functions have a natural syntax and an intuitive semantics. Preserve consistency with the behavior of the built-in types: when in doubt, do as the ints do.
Recognize that anything somebody can do, they will do. They'll throw exceptions, they'll assign objects to themselves, they'll use objects before giving them values, they'll give objects values and never use them, they'll give them huge values, they'll give them tiny values, they'll give them null values. In general, if it will compile, somebody will do it. As a result, make your classes easy to use correctly and hard to use incorrectly. Accept that clients will make mistakes, and design your classes so you can prevent, detect, or correct such errors.
Strive for portable code. It's not much harder to write portable programs than to write unportable ones, and only rarely will the difference in performance be significant enough to justify unportable constructs.
Even programs designed for custom hardware often end up being ported, because stock hardware generally achieves an equivalent level of performance within a few years. Writing portable code allows you to switch platforms easily, to enlarge your client base, and to brag about supporting open systems. It also makes it easier to recover if you bet wrong in the operating system sweepstakes.
Design your code so that when changes are necessary, the impact is localized. Encapsulate as much as you can; make implementation details private.
Edit: I just noticed I very nearly duplicated what cherouvim had posted; sorry about that! But it turns out we're linking to different speeches by Bloch, even if the subject is exactly the same. (cherouvim linked to a December 2005 talk, I to a January 2007 one.) Well, I'll leave this answer here - you're probably best off watching both and seeing how his message and way of presenting it has evolved :)
FWIW, I'd like to point to this Google Tech Talk by Joshua Bloch, who is a greatly respected guy in the Java world, and someone who has given speeches and written extensively on API design. (Oh, and designed some exceptionally good general purpose libraries, like the Java Collections Framework!)
Joshua Bloch, Google Tech Talks, January 24, 2007:
"How To Design A Good API and Why it
Matters" (the video is about 1 hour long)
You can also read many of the same ideas in his article Bumper-Sticker API Design (but I still recommend watching the presentation!)
(Seeing you come from the .NET side, I hope you don't let his Java background get in the way too much :-) This really is not Java-specific for the most part.)
Edit: Here's another 1½-minute bit of wisdom by Josh Bloch on why writing libraries is hard, and why it's still worth putting effort into them (economies of scale) - in response to a question wondering, basically, "how hard can it be". (It is part of a presentation about the Google Collections library, which is also totally worth watching, but more Java-centric.)
Krzysztof Cwalina's blog is a good starting place. His book, Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, is probably the definitive work for .NET library design best practices.
http://blogs.msdn.com/kcwalina/
The number one rule is to treat API design just like UI design: gather information about how your users really use your UI/API, what they find helpful and what gets in their way. Use that information to improve the design. Start with users who can put up with API churn and gradually stabilize the API as it matures.
I wrote a few notes about what I've learned about API design here: http://www.natpryce.com/articles/000732.html
I'd start looking more into design patterns. You're probably not going to find much use for some of them, but as you get deeper into your library design, the patterns will become more applicable. I'd also pick up a copy of NDepend - a great code-measuring utility which may help you decouple things better. You can use the .NET libraries as an example, but personally I don't find them to be great design examples, mostly due to their complexity. Also, start looking at some open-source projects to see how they're layered and structured.
A couple of separate points:
The .NET Framework isn't a class library. It's a Framework. It's a set of types meant to not only provide functionality, but to be extended by your own code. For instance, it does provide you with the Stream abstract class, and with concrete implementations like the NetworkStream class, but it also provides you the WebRequest class and the means to extend it, so that WebRequest.Create("myschema://host/more") can produce an instance of your own class deriving from WebRequest, which can have its own GetResponse method returning its own class derived from WebResponse, such that calling GetResponseStream will return your own class derived from Stream!
And your callers will not need to know this is going on behind the scenes!
A separate point is that for most developers, creating a reusable library is not, and should not be the goal. The goal should be to write the code necessary to meet requirements. In the process, reusable code may be found. In that case, it should be refactored out into a separate library, where it can be reused in the future.
I go further than that (when permitted). I will usually wait until I find two pieces of code that actually do the same thing, or which overlap. Presumably both pieces of code have passed all their unit tests. I will then factor out the common code into a separate class library and run all the unit tests again. Assuming that they still pass, I've begun the creation of some reusable code that works (since the unit tests still pass).
This is in contrast to a lesson I learned in school, when the result of an entire project was a beautiful reusable library - with no code to reuse it.
(Of course, I'm sure it would have worked if any code had used it...)

What is Aspect Oriented Programming? [duplicate]

Closed 14 years ago.
Duplicate:
What is aspect-oriented programming?
Every time I hear a podcast or read a blog entry about it, even here, they make it sound like string theory or something. Is the best way to describe it "OOP with dependency injection on steroids"?
Every time someone tries to explain it, it's like: Aspects, [adults-from-Peanuts-cartoon sound], Orthogonal, [more noise], cross-cutting concerns, etc. Seriously, can anyone describe it in layman's terms?
Layman's terms? Let me give you an example. Let's say you have a web application and you need to add error logging/auditing. One implementation would be to go into every public method and add your try/catch blocks etc...
Well, aspect-oriented programming says hogwash with that: let me inject my method around your method. So, for example, instead of calling YourClass.UpdateModel() directly, the system might call LoggingHandler.CallMethod(); this method might then redirect the call to UpdateModel, but wrap it in a try/catch block to handle logging errors.
Now the trick is that this redirection happens automagically, through configuration or by applying attributes to methods.
This works for, as you said, cross-cutting things, which are very common programming elements that exist in every domain, such as logging, auditing, transaction management, and authorization.
The idea behind it is to remove all this common plumbing code from your business/app tier so you can focus on solving the problem, not worrying about logging this method call or that method call.
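Since the question that opens this page is about Objective-C, here is a hedged sketch of that wrap-a-method-around-your-method idea using method swizzling; YourClass and updateModel are the hypothetical names from the example above:

#import <objc/runtime.h>

@implementation YourClass (Logging)

+ (void)load
{
    // Exchange the two implementations once, at class load time, so every
    // call to updateModel is transparently wrapped in logging.
    Method original = class_getInstanceMethod(self, @selector(updateModel));
    Method wrapper  = class_getInstanceMethod(self, @selector(logged_updateModel));
    method_exchangeImplementations(original, wrapper);
}

- (void)logged_updateModel
{
    NSLog(@"entering updateModel");
    @try {
        // After the exchange, this selector points at the original method.
        [self logged_updateModel];
    }
    @catch (NSException *e) {
        NSLog(@"updateModel failed: %@", e);
        @throw;
    }
    NSLog(@"leaving updateModel");
}

@end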
Class and method attributes in .NET are a form of aspect-oriented programming. You decorate your classes/methods with attributes. Behind the scenes, this adds code to your class/method that performs the particular functions of the attribute. For example, marking a class serializable allows it to be serialized automatically for storage or transmission to another system. Other attributes might mark certain properties as non-serializable, and these would be automatically omitted from the serialized object. Serialization is an aspect, implemented by other code in the system, and applied to your class by the application of a "configuration" attribute (decoration).
AOP is all about managing common functionality (which spans the application, hence "cross-cutting") so that it is not embedded within the business logic.
Examples of such cross-cutting concerns are logging, security management, transaction management, etc.
Frameworks allow this to be managed automatically with the help of some configuration files.
I currently use PostSharp; I would read the info from their website. I use it to provide security around method calls.
"PostSharp is an open platform for the analysis and transformation of .NET assemblies. It comes with PostSharp Laos, a powerful yet simple plug-in that let you develop custom attributes that actually adds behavior of your code. PostSharp Laos is the leading aspect-oriented programming (AOP) solution for the .NET Framework."
The classic examples are security and logging. Instead of writing code within your application to log the occurrence of x, or to check object z for security access control, there is a language contraption "out of band" of normal code which can systematically inject security or logging into routines that don't natively have them, in such a way that even though your code doesn't supply it, it's taken care of.
A more concrete example is the operating system providing access controls to a file. A software program does not need to check for access restrictions because the underlying system does that work for it.
If you think you need AOP, in my experience you actually need to invest more time and effort into appropriate metadata management within your system, with a focus on well-thought-out structural/systems design.