iOS Design Pattern equivalents when coming from a C#/Java world? - objective-c

So I'm new to iOS development and am doing all I can to learn the "best" way to do things. (Yes, I know that's a relative term.)
I'm coming from a world of C# and Java where we do things like inject dependencies via an IoC container, use a repository pattern to abstract data access, use domain services and objects to encapsulate business data and behavior, etc. These are things I have yet to see in iOS development. (Maybe I'm looking in the wrong places.)
I realize that Objective-C is a superset of C and a dynamic/loosely-typed language, which will probably change the game quite a bit when it comes to good design practices. Can anyone point me in the direction of some books/blogs/other resources that would help me make this mental leap from a strongly typed, managed environment to this new world while keeping my designs supple and abiding by the SOLID principles?
EDIT - I want to be clear here. I am not asking how to learn the Cocoa framework and the ins and outs of Objective-C as a language. I have found plenty of resources on that. I'm looking to take this to the next level, begin doing TDD and make sure the projects I'm building will be easy to extend and maintain.

The best way to learn the "best" way to do things is by gaining as deep an understanding as possible of how Apple's existing APIs are designed. After all, regardless of what the theoretical best way to do something is, ultimately your code is going to have to work with these APIs, so it makes sense that you should follow similar patterns in most cases.
As far as books go, there is one called Cocoa Design Patterns that covers this exact subject and, based on Amazon reviews, seems to be well received.
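To make that concrete, here is a minimal sketch of the delegate pattern, probably the single most pervasive design in Apple's APIs. All names here (Downloader, DownloaderDelegate) are invented for illustration:

    #import <Foundation/Foundation.h>

    // A protocol declares what a delegate may be asked to do.
    @protocol DownloaderDelegate <NSObject>
    - (void)downloaderDidFinish:(id)downloader withData:(NSData *)data;
    @optional
    - (void)downloader:(id)downloader didFailWithError:(NSError *)error;
    @end

    @interface Downloader : NSObject {
        id<DownloaderDelegate> delegate;   // weak by convention: not retained
    }
    @property (assign) id<DownloaderDelegate> delegate;
    - (void)start;
    @end

    @implementation Downloader
    @synthesize delegate;
    - (void)start {
        // ... do the actual work, then hand the result back:
        [delegate downloaderDidFinish:self withData:[NSData data]];
    }
    @end

UITableView, NSURLConnection and friends all hand decisions back to your code this way, which is why internalizing the pattern pays off quickly.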

There are plenty of design patterns specific to Objective-C and Cocoa development; you will find a very nice summary in this book. It covers the language's built-in patterns as well as some more complicated, higher-level (architectural, if you like) patterns.
In general, the SOLID principles apply and are no less relevant in a dynamic language than in a strongly typed one. Also, in Objective-C you have the option of using strong types in your design, so pretty much all "classical" OOP patterns apply, although some framework or language constructs may be more elegant and suitable for a given job.
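As a rough sketch of how the familiar dependency-inversion idea translates (all names here, OrderStore/OrderService/Order, are hypothetical): in Objective-C you typically reach for a protocol plus initializer injection rather than an IoC container.

    #import <Foundation/Foundation.h>

    @class Order;

    // The abstraction the service depends on (dependency inversion).
    @protocol OrderStore <NSObject>
    - (void)saveOrder:(Order *)order;
    @end

    @interface OrderService : NSObject {
        id<OrderStore> store;
    }
    - (id)initWithStore:(id<OrderStore>)aStore;   // constructor injection
    - (void)placeOrder:(Order *)order;
    @end

    @implementation OrderService
    - (id)initWithStore:(id<OrderStore>)aStore {
        if ((self = [super init])) {
            store = [aStore retain];
        }
        return self;
    }
    - (void)placeOrder:(Order *)order {
        [store saveOrder:order];   // talks to the abstraction, not to SQLite/Core Data
    }
    - (void)dealloc {
        [store release];
        [super dealloc];
    }
    @end

In a test you pass in any stub conforming to OrderStore; because the language is dynamic, no container or mocking framework is strictly required.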

I recommend you follow the cs193p course and read Learning Objective-C from Apple's website.
I think these two links will point you in the right direction as to how things should work on iOS.

Related

Where should I go next?

Well, I think I have enough knowledge of Cocoa that I can go learn another thing. What would you recommend learning after Cocoa? (e.g. Core Animation, OpenCL, Core Data)
It really depends what your goals are. If you want to stick with Objective-C, dive into Cocoa Touch if you haven't already. If you want to stick to developing Mac apps, I'd also suggest checking out frameworks such as MacRuby; it is pretty sweet.
If you want to go somewhere totally different, I've been messing around with Rails and Android a lot recently.
Learn LISP.
It is fundamentally different from pretty much every other programming language there is. It will force you to think about problems in new ways. Even if you never use LISP in a real-world project (I never did), you will become a much better programmer.
Anyone who wants to call themselves a programmer should know about (spend at least a full week with):
C - to know the heavy lifting and how it actually works.
LISP - to understand functional programming.
Smalltalk/Objective-C - to understand real object oriented programming.
Prolog - to understand logic programming.
C++, and any language that derives its OOP design from it, is just C structs with function pointers. Yes, Java and C#, I'm looking at you too.
Learning PostScript is a good way to broaden your understanding of the drawing model also used by Quartz and AppKit, and can be useful for prototyping your drawing code.
Learn another language. Maybe C/C++ since they are similar. Or maybe C#. Or you can try something completely different such as Python, Pascal, D or VB.
It depends what you're aiming for,
but if you are not strong in C/C++ I would suggest that: a) it's what Cocoa is based on; and b) if you want to port your code to other platforms, you'll usually have a good chance of reusing the C/C++ directly without a lot of changes.
(e.g. Core Animation, OpenCL, Core Data)
Those are just tools. If you want to specialize in iPhone development, it's good practice to look up the various features, study the examples, and then implement a little example for yourself.
Otherwise, if you have no precise goal, you can also go to the bookstore and pick a random book ^^

How do you write good highly useful general purpose libraries?

I asked this question about the Microsoft .NET libraries and the complexity of their source code. From what I'm reading, writing general purpose libraries and writing applications can be two different things. When writing libraries, you have to think about the client, who could literally be anyone (supposing I release the library for use by the general public).
What kind of practices or theories or techniques are useful when learning to write libraries? Where do you learn to write code like the one in the .NET library? This looks like a "black art" which I don't know too much about.
That's a pretty subjective question, but here's one objective answer. The Framework Design Guidelines book (be sure to get the 2nd edition) is a very good book about how to write effective class libraries. The content is very good and the often dissenting annotations are thought-provoking. Every shop should have a copy of this book available.
You definitely need to watch Josh Bloch in his presentation How to Design a Good API & Why it Matters (1h 9m long). He is a Java guru but library design and object orientation are universal.
One piece of advice often ignored by library authors is to internalize costs. If something is hard to do, the library should do it. Too often I've seen the authors of a library push something hard onto the consumers of the API rather than solving it themselves. Instead, look for the hardest things and make sure the library does them or at least makes them very easy.
I will be paraphrasing from Effective C++ by Scott Meyers, which I have found to be the best advice I got:
Adhere to the principle of least astonishment: strive to provide classes whose operators and functions have a natural syntax and an intuitive semantics. Preserve consistency with the behavior of the built-in types: when in doubt, do as the ints do.
Recognize that anything somebody can do, they will do. They'll throw exceptions, they'll assign objects to themselves, they'll use objects before giving them values, they'll give objects values and never use them, they'll give them huge values, they'll give them tiny values, they'll give them null values. In general, if it will compile, somebody will do it. As a result, make your classes easy to use correctly and hard to use incorrectly. Accept that clients will make mistakes, and design your classes so you can prevent, detect, or correct such errors.
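In Objective-C terms (a hypothetical sketch, not Meyers's own example), one way to act on this advice is to validate in the designated initializer so an invalid object can never exist:

    #import <Foundation/Foundation.h>

    @interface Temperature : NSObject {
        double kelvin;
    }
    - (id)initWithKelvin:(double)k;   // designated initializer
    @end

    @implementation Temperature
    - (id)initWithKelvin:(double)k {
        if ((self = [super init])) {
            if (k < 0.0) {            // physically impossible: fail fast
                [self release];
                return nil;
            }
            kelvin = k;
        }
        return self;
    }
    @end

Every later method can then assume the invariant instead of re-checking it on each call.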
Strive for portable code. It's not much harder to write portable programs than to write unportable ones, and only rarely will the difference in performance be significant enough to justify unportable constructs.
Even programs designed for custom hardware often end up being ported, because stock hardware generally achieves an equivalent level of performance within a few years. Writing portable code allows you to switch platforms easily, to enlarge your client base, and to brag about supporting open systems. It also makes it easier to recover if you bet wrong in the operating system sweepstakes.
Design your code so that when changes are necessary, the impact is localized. Encapsulate as much as you can; make implementation details private.
Edit: I just noticed I very nearly duplicated what cherouvim had posted; sorry about that! But it turns out we're linking to different speeches by Bloch, even if the subject is exactly the same. (cherouvim linked to a December 2005 talk, I to a January 2007 one.) Well, I'll leave this answer here — you're probably best off watching both and seeing how his message and way of presenting it have evolved :)
FWIW, I'd like to point to this Google Tech Talk by Joshua Bloch, who is a greatly respected guy in the Java world, and someone who has given speeches and written extensively on API design. (Oh, and designed some exceptionally good general purpose libraries, like the Java Collections Framework!)
Joshua Bloch, Google Tech Talks, January 24, 2007:
"How To Design A Good API and Why it
Matters" (the video is about 1 hour long)
You can also read many of the same ideas in his article Bumper-Sticker API Design (but I still recommend watching the presentation!)
(Seeing you come from the .NET side, I hope you don't let his Java background get in the way too much :-) This really is not Java-specific for the most part.)
Edit: Here's another 1½ minute bit of wisdom by Josh Bloch on why writing libraries is hard, and why it's still worth putting effort in it (economies of scale) — in a response to a question wondering, basically, "how hard can it be". (Part of a presentation about the Google Collections library, which is also totally worth watching, but more Java-centric.)
Krzysztof Cwalina's blog is a good starting place. His book, Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, is probably the definitive work for .NET library design best practices.
http://blogs.msdn.com/kcwalina/
The number one rule is to treat API design just like UI design: gather information about how your users really use your UI/API, what they find helpful and what gets in their way. Use that information to improve the design. Start with users who can put up with API churn and gradually stabilize the API as it matures.
I wrote a few notes about what I've learned about API design here: http://www.natpryce.com/articles/000732.html
I'd start looking more into design patterns. You probably won't find much use for some of them, but as you get deeper into your library design the patterns will become more applicable. I'd also pick up a copy of NDepend - a great code-measuring utility which may help you decouple things better. You can use the .NET libraries as an example, but personally I don't find them to be great design examples, mostly due to their complexity. Also, start looking at some open source projects to see how they're layered and structured.
A couple of separate points:
The .NET Framework isn't a class library. It's a framework: a set of types meant not only to provide functionality, but to be extended by your own code. For instance, it provides you with the Stream abstract class and concrete implementations like the NetworkStream class, but it also provides the WebRequest class and the means to extend it, so that WebRequest.Create("myschema://host/more") can produce an instance of your own class deriving from WebRequest. That class can have its own GetResponse method returning your own class derived from WebResponse, whose GetResponseStream method will return your own class derived from Stream!
And your callers will not need to know this is going on behind the scenes!
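Cocoa offers the same kind of extension point. Since this thread's language is Objective-C, here is a rough sketch using the real NSURLProtocol hook; the subclass itself (MySchemeProtocol) is invented, and a complete version would also have to build responses in startLoading:

    #import <Foundation/Foundation.h>

    @interface MySchemeProtocol : NSURLProtocol
    @end

    @implementation MySchemeProtocol
    + (BOOL)canInitWithRequest:(NSURLRequest *)request {
        return [[[request URL] scheme] isEqualToString:@"myschema"];
    }
    + (NSURLRequest *)canonicalRequestForRequest:(NSURLRequest *)request {
        return request;
    }
    - (void)startLoading { /* build the response and feed it to [self client] */ }
    - (void)stopLoading  { /* cancel any work in flight */ }
    @end

    // Somewhere in application setup:
    //     [NSURLProtocol registerClass:[MySchemeProtocol class]];
    // After that, ordinary URL-loading callers get your class behind the
    // scenes, much like WebRequest.Create in .NET.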
A separate point is that for most developers, creating a reusable library is not, and should not be the goal. The goal should be to write the code necessary to meet requirements. In the process, reusable code may be found. In that case, it should be refactored out into a separate library, where it can be reused in the future.
I go further than that (when permitted). I will usually wait until I find two pieces of code that actually do the same thing, or which overlap. Presumably both pieces of code have passed all their unit tests. I will then factor out the common code into a separate class library and run all the unit tests again. Assuming that they still pass, I've begun the creation of some reusable code that works (since the unit tests still pass).
This is in contrast to a lesson I learned in school, when the result of an entire project was a beautiful reusable library - with no code to reuse it.
(Of course, I'm sure it would have worked if any code had used it...)

Which is easier for beginners: RubyCocoa or ObjC/Cocoa

I've heard a few debates in the past over which is more mature: RubyCocoa or Obj-C/Cocoa... but I have felt that the answers fly right over the head of the "newbie" who would truly appreciate an answer.
So the question is: for a total beginner with little-to-no programming experience, is it easier to learn Ruby and explore Cocoa via the bridge (then possibly tackle Obj-C), or to jump straight into Objective-C and Cocoa?
Both communities are strong and have a plethora of resources, but as many people have pointed out the syntax of Obj-C is just daunting. Perhaps for a true beginner it would be easier to learn Ruby then tackle Objective-C?
Update: I apologize, but when I said "learn Ruby and explore Cocoa" I did not mean to learn programming via RubyCocoa, but rather to learn Ruby (and once one feels confident enough) begin to explore Cocoa with the possibility of leveraging their growing skill-set to tackle Obj-C.
I would not recommend learning to program with RubyCocoa.
I love Ruby and think it's a great language to learn programming, but the RubyCocoa bridge isn't documented well enough that I'd recommend it as a learning environment. You'd be learning general programming concepts, the Cocoa frameworks and the quirks of RubyCocoa all at the same time. That's a lot of stuff to shove into your head.
If you're bound and determined to start with Cocoa, start by learning Objective-C. Otherwise, you could learn Ruby to begin with and then transition to Objective-C once you feel a little more comfortable as a programmer. And once you've done all that, you can use RubyCocoa, but then you'll know enough that it won't make you go crazy.
I would start with what Apple is preaching: Objective-C/Cocoa
Writing Code is the Easy Part
There's really no point in:
Trying to see which is easy. If you opt for the easy way, you will always get it wrong; fear not what is hard, for hard is to fear not.
Trying to compare languages/approaches that way. As per the post I added, deciding what's easy/nice/hard in this case is a question of syntax, which ultimately boils down to interpretation; beauty is in the eye of the beholder.
Depending on what they ultimately attempt to do with the technology, you will find some things are "easier" than others in one approach or the other; "easy" is a hard thing to define.
Only one question matters: what does one know before being exposed to either of the two approaches? -- you said:
for a total beginner, with little-to-no programming experience
My answer:
Often near where I live tourists ask:
"How do I get to placename X from here?"
People here usually answer:
"If I were you, and I was trying to get to placename X, I wouldn't start from here.."
So.. the answer to your question is:
Neither
Total beginners should always study the basics of programming, as #Tafkas said (not necessarily OOP languages, but programming), before making any kind of decision on what to study and/or implement. (This, plus requirements gathering.)
Otherwise the people learning these language/technology skills will be just another set of script kiddies on their way to becoming copy-paste code monkeys.
The problem with starting with RubyCocoa is that you end up learning both ruby and cocoa and the interaction between the two at the same time. I would say learn ruby, or learn Objective-C/Cocoa. Jumping right into RubyCocoa is going to throw you off.
It sounds like your goal is to learn to program in Cocoa, using either Ruby or Objective-C as the language.
While I've never used Ruby (or RubyCocoa, for that matter), my understanding is that Cocoa is written with Objective-C as the primary language, and the bridges (Python and Ruby) come in second. While they generally work well, there are some rough edges that aren't there when using Cocoa from Objective-C.
I would say that you should go the Objective-C/Cocoa route. It might not be bad to start with another language first - C if you want to learn it (which would be useful, since Objective-C is a superset of C), or something like Java if you want to go the OO route.
That's not to say that RubyCocoa doesn't work or isn't useful. It's great for what it does, but I don't think that it is the place to start with Cocoa programming.
If you're going to be serious about writing applications for OS X and/or the iPhone, I would highly suggest you get your feet wet with Objective-C and Cocoa.
The reasons are simple:
The documentation from Apple on Objective-C is excellent.
You're going to get more help from the community here at Stack Overflow because there are more Objective-C/Cocoa developers than RubyCocoa. (from what I've seen so far).
Objective-C developers are very good at helping each other out, and I could not find a group of developers more welcoming to new people learning the language.
Great developer books are available, as well as outside training if you want it.
The big one that I see is that you can NOT develop for the iPhone using RubyCocoa. But if you learn Objective-C/Cocoa you can pretty much dive right into Cocoa Touch.
There is no guarantee that Apple will keep RubyCocoa updated as much as they do with Cocoa.
Don't get me wrong: Ruby is a great language and I don't think you can go wrong learning it. However, if right now you have the option to learn either one, go with Objective-C/Cocoa.
I think you're in for a harder road by going with RubyCocoa if you want to build serious applications for either OS X or especially later for the iPhone.
The main reason is simply being able to find answers to common programming questions that you might have. There is a burgeoning community around Objective-C/Cocoa with a lot of forum support, code snippets, samples, etc. It will be infinitely easier for you to rapidly learn how to you use Cocoa if you understand Objective-C. It will also be easier for your peers to troubleshoot your code and help you out when you get stuck. Objective-C is really not that hard to learn, especially if you have some kind of grounding in OO concepts.
I would suggest starting with an OO language such as C++ or Java. After understanding the basic concepts of OO, it should not be too hard to learn Objective-C.
The problem with RubyCocoa is that there is no guarantee Apple will support it in the future. They have dropped the Cocoa-Java bridge in the past.
I think this depends on how much the beginner already knows. If you already know object oriented programming, you should definitely learn objective-c. Thinking in paradigms like messaging, delegation, and categories will help a lot to understand the Cocoa system. If you've got a few languages under your belt, but no object oriented programming, then you probably also have enough experience to learn OOP through Cocoa, but understand that it handles things differently from languages like C++. If you have very little programming experience, then ruby may be better in the short term.
One other advantage of Objective-C to keep in mind is its manual reference counting memory management paradigm. It can be much easier to deal with than malloc/free, but it doesn't allow the laziness that Java and scripting languages engender.
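For anyone who has not met it yet, a minimal illustration (a fragment, not a full program) of the retain/release rules being referred to:

    // Pre-ARC Objective-C: ownership is counted by hand.
    NSString *name = [[NSString alloc] initWithString:@"Anna"]; // alloc: you own it (count 1)
    NSString *upper = [name uppercaseString]; // not alloc/copy/retain: you don't own it
    [name retain];     // claim a second ownership stake (count 2)
    [name release];    // give one stake back (count 1)
    [name release];    // count reaches 0: the object is deallocated
    // 'upper' is autoreleased and cleaned up by the surrounding autorelease pool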

OOP concepts confusion?

While reading some programming books, I notice that the authors say that, in OOP, you may run into some confusion while trying to understand the main idea of OOP.
And hell yeah, I had some confusion! Did you have the same, and what causes this confusion for programmers (even experienced programmers)?
And if you had it, how did you beat it?
Thanks
The Animal trope works when explaining it to most people.
(Further useful links here and here)
A lot of the confusion when learning OOP comes from trying to pick the right relationship between objects and classes of objects, particularly whether:
Object contains some other Object (or Object1 has an Object2)
Object is an instance of Class
If I can think of a good example that shows a case where either might be appropriate, I'll add it...
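In the meantime, here is a hypothetical Objective-C sketch of the relationships in question (Car, Engine and Convertible are invented names):

    #import <Foundation/Foundation.h>

    @class Engine;

    // "has-a" (composition): an Engine is a part, so Car contains one.
    @interface Car : NSObject {
        Engine *engine;
    }
    @end

    // "is-a" (inheritance): a Convertible is a kind of Car.
    @interface Convertible : Car
    - (void)openRoof;
    @end

    // "instance-of": one concrete object of a class, e.g. at a call site:
    //     Car *myCar = [[Convertible alloc] init];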
OOP takes a "problem oriented" approach to programming as opposed to the traditional "machine oriented" approach used in languages like C and Pascal. Learning OOP can be quite tough if you've programmed extensively in procedural/functional languages. It is to these programmers that things tend to be more confusing. If you are new to programming, you'll probably find things a lot less confusing since you're starting off with a fresh mind.
Having said that, I've seen many programmers who've worked extensively with languages like Java and claim to be good OOP programmers when they are actually far from it. Sure, they use Java language features like interfaces, inheritance, etc., create objects "which are instances of classes", and "send a message to an object". Most people use a lot of OOP jargon because they are exposed to it, but when it comes down to writing a simple application, their resulting code exposes their poor understanding.
My advice to you is: don't get caught up in the jargon alone. Question and learn the underlying concepts diligently. You might have your first semi-nirvana (like I did) when you learn polymorphism and the benefits it brings to code reusability, and another semi-nirvana when you understand the trade-offs between reuse via inheritance and reuse via composition. In the end, you will know that you've understood OOP well if you are able to design well; a good OO design is easily a good measure of how well you understand OOP.
If you are serious about OOP, you should read the first two chapters of the GoF book on Design Patterns. It might be a little tough for new programmers, but it lays out the crux of the thinking behind OOP. This book is an important reference which any serious OOP programmer should have. If you understand the concepts in this book well, consider yourself a good OO programmer.
Yes, I experienced a bit of confusion initially. This was back in the day when OO was just starting to become mainstream, so there were a lot of books out there which covered it, but didn't explain it well for people who didn't already know what it was. As a result, I started out thinking that an object and a class were largely interchangeable and defining a new class for each object I wanted to create.
I finally "got it" by playing around on LambdaMOO, a MUD (think World of Warcraft, but with no graphics) with an object-oriented in-game programming language. Ironically, MOOCode makes no distinction between classes and objects - objects inherit directly from other objects. (It did have a convention of objects intended for use as "base classes" to be named "Generic Foo" as a way to distinguish them from specific ("instance") Foos, but that's as close to a class/object distinction as it had.)
Indeed, I think way too much emphasis is put on the 'class' concept.
The biggest leap forward in my understanding was when reading about the "Tell, don't Ask" principle.
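A hedged sketch of that principle in this thread's Objective-C (account, balance and withdraw: are hypothetical names):

    // Ask (weaker): interrogate the object, make the decision outside it.
    if ([account balance] >= amount) {
        [account setBalance:[account balance] - amount];
    }

    // Tell (stronger): push the decision into the object that owns the data.
    [account withdraw:amount];   // the Account enforces its own invariants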
I only started 'feeling' Object Orientation when playing around with (and reading about) duck-typed environments like Ruby, JavaScript, Python etc... after like 8 years happily creating truckloads of classes in C++.
Statically typed languages are great for production code, but you pay a lot of overhead when trying to get a feeling for Object Orientation.
Also, alongside the commonly used term OOP, one often forgets that first come OOA and OOD (analysis and design).
I think that programmers who were experienced in development with function-oriented (procedural) languages had particular trouble understanding the concepts of OOP. At least, it was really confusing to me, and I did a whole bunch of things in a procedural style while using an OOP language (Java).
But I also think that the OOP approach is a great thing for beginners, because this approach is very "natural".
I never really had any confusion, but I learned programming along the time axis on which it evolved, so I had assembly, C, C++, Java, C# and loads of others which are not relevant here. What you have to embrace is that everything shall be expressed as an object, and that an object contains information describing itself (properties) and can perform tasks related to itself (methods, e.g. Car.GetAllCars()).
For inheritance and polymorphism and all the rest, I recommend practice. Practice everything, since practice makes perfect. Try to develop the examples given in all the books.
Once you understand the OO basics, take a look at design patterns and design principles (e.g. by reading Head First Design Patterns). It will teach you how you should actually use the tools that OO gives you. While this is no substitute for practical experience, it can certainly speed up the learning process.
Thank you for your answers.
I think giving examples works best, but not every time, right?
I heard the creator of C++ say that it takes time and patience, and that you will understand it better by trying.

How to develop *real life* oop skills?

I've been studying OOP for quite a while now and I have a good grasp of the theory. I read the Head First book on OOP and, while it reinforced a lot of the theory, I found the case studies to be somewhat trivial.
I find that I'm applying OOP principles to my code each day, but I'm not sure if I'm applying them correctly. I need to get to the point where I am able to look at my code and know whether I'm using inheritance appropriately, whether my object is cohesive enough, etc.
Does anyone have any good recommendations (books, online guides, blogs, walk-throughs, etc.) for taking the next step in developing solid OOP skills?
I am working primarily in .NET (visual basic), but I welcome suggestions that incorporate various platforms.
Read Refactoring by Martin Fowler, and apply it to your own work.
It will take you through a litany of malodorous characteristics of software code ("code smells"), showing how to detect improperly constructed classes and, even more importantly, how to fix them.
Consider looking into design patterns. Although it seems like they aren't commonly used in enterprise applications (I've seen them more commonly in APIs and frameworks than embedded in enterprise code), they could be applied to make software simpler or more robust in a lot of situations, if only developers knew how to apply them.
The key is to understand the design patterns first, then with experience you'll learn how to apply them.
There is a Head First book on design patterns that teaches the concept pretty simply, although if you want a book that really covers design patterns in detail, check out the Gang of Four design patterns book, which is basically what made design patterns mainstream and is referred to almost every time the topic is brought up.
Design patterns can be applied in pretty much any object-oriented language to some degree or another, although some patterns can be overkill or over engineering in some cases.
EDIT:
I also want to add that you should check out the book Code Complete 2. It's a very influential book in the world of software development. It covers a lot of different concepts and theories, and I learn something new every time I read it. It's such a good book that when I re-read it every 6 months to a year, I look at it from a different perspective each time, and it makes me a better programmer just by re-reading it. No matter how much you might think you know, this book will make you realize just how little you really know. It's really a great book. I can't stress enough how much you should own this book.
If you already have the basics, I believe only experience will get you further. You say you are not sure if you are applying the principles correctly, but there is no one correct way. Code you write today, you'll look at in 6 months time, and wonder why you wrote it that way, and probably know of a better, cleaner way of doing it. I also guarantee that after 10 years, you'll still be learning new techniques and tricks. Don't worry too much about it, it will come, just read as much as you can, and try and apply what you read in small chunks.
I am currently half-way through the following book:
http://www.amazon.com/Applying-UML-Patterns-Introduction-Object-Oriented/dp/0131489062
I cannot recommend this book strongly enough in terms of learning a real-life, professional-grade, practical approach to drafting and applying a well-formed and iterative design strategy before diving into code.
I, too, read the "Head First" book and felt that I was much better off for having read it.
After having a few years of working-world experience, I now view the Craig Larman book that I am recommending to be a perfect "next step" for me.
About the Presence of "UML" in this Book Title:
Whether you have positive feelings or negative feelings about UML notation, please do not let that influence your decision to buy the book (ISBN 0131489062) in either direction.
The prominence of "UML" in the title is misleading. While the author does use and explain UML notation, these explanations are extremely well-woven into relevant design discussions, and at no time does this book read like a boring UML spec.
In fact, here is a quote taken directly from the book:
What's important is knowing how to think and design in objects, which is a very different and much more valuable skill than knowing UML notation. While drawing a diagram, we need to answer key questions: What are the responsibilities of the object? Who does it collaborate with? What design patterns should be applied? Far more important than knowing the difference between UML 1.4 and 2.0!
This book at times seems like it is "speaking to" a lead architect or a project manager. What I mean to say by that is that it assumes that the reader has significant control over the planning and direction of a software project.
Nonetheless, even if you are only responsible for some very small piece of your company's projects and products, I would still recommend this book and encourage you to apply some "scaled down" modifications of the book's advice to your piece of the project.
My OOP epiphany came from Grady Booch's book, way long time ago. Suddenly I realized why objects were good.
While polymorphism is cool, encapsulation is 75% of why objects are cool. It is sort of like an interface: you see the buttons but not the wiring. Before objects, only the most disciplined coders kept their grubby fingers off the internal bits of other people's procedures (it was called "structured programming").
Objects make it easy to Do the Right Thing. Inheritance and polymorphism are little bonuses.
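A small hypothetical sketch of that "buttons, not wiring" idea in Objective-C:

    #import <Foundation/Foundation.h>

    @interface Stack : NSObject {
    @private
        NSMutableArray *items;    // the wiring: invisible to clients
    }
    - (void)push:(id)object;      // the buttons: the only way in
    - (id)pop;
    @end

    @implementation Stack
    - (id)init {
        if ((self = [super init])) {
            items = [[NSMutableArray alloc] init];
        }
        return self;
    }
    - (void)push:(id)object { [items addObject:object]; }
    - (id)pop {
        if ([items count] == 0) return nil;
        id top = [[[items lastObject] retain] autorelease];
        [items removeLastObject];
        return top;
    }
    - (void)dealloc { [items release]; [super dealloc]; }
    @end

Because items is private, no caller can disturb the stack's internal order; the only operations possible are the ones the interface advertises.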
One way to learn about objects is to read other people's code. I learned a lot by reading the source code for the Delphi VCL framework. Even just looking at the documentation for Java will help you see what a single object class should do and how it is designed to be used by other objects.
Start a project of your own and pay attention when you want to sub-class your own classes and find that you have to go back and break up some protected methods so you can override just one piece of a process instead of replacing all of it. See how ancestors talk to descendants by calling abstract functions. In other words, go make a lot of mistakes and learn from them.
Enjoy!
Frankly, re-reading old David Parnas papers on information hiding helps me get in the right state of mind. The case studies may not be directly applicable but you should be able to get some useful generalizations out of them.
My epiphany happened when I tried to implement a very OO problem (dynamically and recursively building SQL statements) in VB6. The best way to understand polymorphism or inheritance is to need it and not be able to use it.
One thing that will definitely help you is working on a well-known, respected open source project. Either dig through the source code and see how things are done or try to make some additions / modifications. You'll find that there isn't one style or one right answer for most problems, but by looking at several projects, you'll be able to get a wide view of how things can be done. From there, you'll begin to develop your own style and will hopefully make some contributions to open source in the process.
I think you have to attempt and fail at implementing OO solutions. That's how I did it anyway. What I mean by fail is that you end up writing smelly code while successfully delivering a working solution. After it's written you'll get a feel for where things didn't quite feel right. You may have some epiphanies, and/or you may go and hunt for a slicker solution from other programmers. Undoubtedly you'll implement some variation of standard design patterns by accident. In hindsight, a light will click on (oh! so that's what a visitor is for), and then understanding will accelerate.
As others have said, I think tooling through some good OO open source code is a good idea. So is working with more experienced programmers who would be willing to critique your work. However understanding comes through doing.
You might want to try to read (and write) some Smalltalk for a while. Squeak is a free implementation that can show you the power of a fully object-oriented environment (unlike java or .net). All library code source is included. The language itself is incredibly simple. You'll find that java and c# are slowly adding the features well-known to Smalltalk since 1980.
TortoiseHg is an extraordinarily well-designed piece of OO open source software (written in Python).
If you already understand the basics, building something from scratch in a fully object oriented language will be a good step in fully understanding OOP software architecture. If you don't know Python, Python Essential Reference will take you through the language in full in a few days to a week.
After you understand the language take a look through the software above and you'll have all sorts of epiphanies.
To understand basically anything thoroughly, you need to have a decent knowledge of at least one abstraction level above and one level below it. In the case of OO, others have mentioned design patterns as the layer above OO. This helps a lot to illustrate why OO is useful.
As far as the layer below OO, try to play around with higher-order functions/late binding for a while and get a feel for how these relatively simple constructs are used. Also, try to understand how OO is implemented under the hood (vtables, etc.) and how it can be done in pure C. Once you grok the value of using higher order functions and late binding, you'll quickly realize that OO is just a convenient syntax for passing around a set of related functions and the data they operate on.
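A sketch of that under-the-hood view, written in the plain-C subset that every Objective-C file compiles: a "class" is just a struct of data plus function pointers, and late binding is calling through the pointer.

    #include <stdio.h>

    typedef struct Shape {
        double (*area)(struct Shape *self);   /* a one-slot vtable */
        double w, h;
    } Shape;

    static double rect_area(Shape *self) { return self->w * self->h; }
    static double tri_area(Shape *self)  { return self->w * self->h / 2.0; }

    int main(void) {
        Shape rect = { rect_area, 3.0, 4.0 };
        Shape tri  = { tri_area,  3.0, 4.0 };
        Shape *shapes[] = { &rect, &tri };
        for (int i = 0; i < 2; i++) {
            /* late binding: this call site never names the concrete type */
            printf("%f\n", shapes[i]->area(shapes[i]));
        }
        return 0;
    }

Once this construction feels natural, an Objective-C method table stops looking like magic.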