We know things like buttons and panels are common to most languages, but are there any officially recognized and used recommendations or patterns for GUI APIs? (say, something from W3C maybe?)
I don't think there is a collection of what you are looking for. But in each of the frameworks you will probably find patterns that are applicable. Some might be specific to the runtime architecture that the API is built for, but some are fairly general, like MVC, Delegation, or Decorator (a small Decorator sketch follows this answer). Here are some links I know of to relevant architecture documents:
Apple, Cocoa
Qt, General, Model-View Programming
Java, Swing (I don't know how old this is; I haven't done anything with Swing lately)
I have used a lot of GUI frameworks over the years, and I still like the architecture that went into Cocoa; it is one of the frameworks that implements GUIs in a very object-oriented way. Most of the paradigms are applied consistently and repeatedly, so that once you have figured out how to do something, it will usually carry over to somewhere else.
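To make the Decorator pattern mentioned above a bit more concrete, here is a minimal, hypothetical sketch in Java (the class names are illustrative, not taken from any particular framework): extra drawing behavior is added by wrapping an existing widget instead of subclassing it.

```java
// Minimal Decorator sketch: behavior is added by wrapping an existing
// component rather than by subclassing it. All names are illustrative.
interface Widget {
    void draw();
}

class TextBox implements Widget {
    public void draw() {
        System.out.println("drawing a plain text box");
    }
}

// A decorator holds a reference to the wrapped widget and adds behavior
// before delegating to it.
class BorderDecorator implements Widget {
    private final Widget inner;

    BorderDecorator(Widget inner) {
        this.inner = inner;
    }

    public void draw() {
        System.out.println("drawing a border");
        inner.draw();   // delegate the original drawing
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Widget plain = new TextBox();
        Widget bordered = new BorderDecorator(plain); // same interface, extra behavior
        bordered.draw();
    }
}
```

The same shape shows up throughout GUI frameworks, for example when scroll bars or borders are layered around an existing view.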
I don't think there are any official, formal patterns from standards bodies. But there is a UI pattern catalog here. Yahoo also maintains a UI pattern library.
UI design is imho an art -- there's some "do this, don't do that", but not much.
Nonetheless, designing-a-gui on SO has some practical advice.
"Officially recognized": try Microsoft ?
So I'm new to iOS development and am doing all I can to learn the "best" way to do things. (Yes I know that's a relative term)
I'm coming from a world of C# and Java where we do things like injecting dependencies via an IoC container, use the repository pattern to abstract data access, use domain services and objects to encapsulate business data and behavior, etc. These are things I have yet to see in iOS development. (Maybe I'm looking in the wrong places.)
I realize that Objective-C is a superset of C and a dynamic/loosely-typed language, which will probably change the game quite a bit when it comes to good design practices. Can anyone point me in the direction of some books/blogs/other resources that would help me make this mental leap from a strongly typed, managed environment to this new world while keeping my designs supple and abiding by the SOLID principles?
EDIT - I want to be clear here. I am not asking how to learn the Cocoa framework and the ins and outs of Objective-C as a language. I have found plenty of resources on that. I'm looking to take this to the next level, begin doing TDD and make sure the projects I'm building will be easy to extend and maintain.
The best way to learn the "best" way to do things is by gaining as deep an understanding as possible of how Apple's existing APIs are designed. After all, regardless of what the theoretical best way to do something is, ultimately your code is going to have to work with these APIs, so it makes sense that you should follow similar patterns in most cases.
As far as books go, there is one called Cocoa Design Patterns that covers this exact subject and, based on Amazon reviews, seems to be well received.
There are plenty of design patterns specific to Objective-C and Cocoa development; you will find a very nice summary in this book. It covers language built-in patterns as well as some more complicated, higher-level (architectural, if you like) patterns.
In general, the SOLID principles apply and are no less relevant in a dynamic language than they are in a strongly typed one. The other thing is that in Objective-C you have the option of using strong types in your design, so pretty much all "classical" OOP patterns apply, although there may be framework or language constructs that are more elegant and better suited to the job.
I recommend you follow the cs193p course and read Learning Objective-C from Apple's website.
I think these two links will point you to the right direction as to how things should work on iOS.
Aspect-oriented programming is a subject matter that has been very difficult for me to find any good information on. My old Software Engineering textbook only mentions it briefly (and vaguely), and the wikipedia and various other tutorials/articles I've been able to find on it give ultra-academic, highly-abstracted definitions of just what it is, how to use it, and when to use it. Definitions I just don't seem to understand.
My (very poor) understanding of AOP is that there are many aspects of producing a high-quality software system that don't fit neatly into a nice little cohesive package. Some classes, such as Loggers, Validators, DatabaseQueries, etc., will be used all over your codebase and thus will be highly-coupled. My (again, very poor) understanding of AOP is that it is concerned with the best practices of how to handle these types of "universally-coupled" packages.
Question: Is this true, or am I totally off? If I'm completely wrong, can someone please give a concise, layman's explanation of what AOP is, an example of a so-called aspect, and perhaps even a simple code example?
Separation of Concerns is a fundamental principle in software development. There is a classic paper by David Parnas, On the Criteria To Be Used in Decomposing Systems into Modules, that may introduce you to the subject; also read Uncle Bob's SOLID principles.
But then there are cross-cutting concerns that appear in many use cases, like authentication, authorization, validation, logging, transaction handling, exception handling, caching, etc., and that span all the layers of the software. If you want to tackle them without duplication, following the DRY principle, you have to handle them in a more sophisticated way.
One way is declarative programming: in .NET this can be as simple as annotating a method or a property with an attribute, and the behavior of the code is then changed at runtime depending on those annotations (see the sketch after the links below).
You can find a nice chapter on this topic in Sommerville's Software Engineering book.
Useful links
C2 wiki CrossCuttingConcern, MSDN, How to Address Crosscutting Concerns in Aspect Oriented Software Development
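To make that declarative style a bit more concrete: the answer above describes .NET attributes, but the same idea can be sketched in plain Java with an annotation plus reflection. This is a hypothetical, minimal sketch, not a real AOP framework; all names are made up for illustration.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// A hypothetical marker annotation; a real framework would scan for it,
// but here a tiny helper checks it via reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Logged {}

class OrderService {
    @Logged
    public void placeOrder(String item) {
        System.out.println("placing order for " + item);
    }
}

public class DeclarativeDemo {
    // Behavior changes at runtime depending on the declarative annotation:
    // logging happens only for methods marked @Logged.
    static void invoke(Object target, String methodName, String arg) throws Exception {
        Method m = target.getClass().getMethod(methodName, String.class);
        boolean logged = m.isAnnotationPresent(Logged.class);
        if (logged) System.out.println("entering " + methodName);
        m.invoke(target, arg);
        if (logged) System.out.println("leaving " + methodName);
    }

    public static void main(String[] args) throws Exception {
        invoke(new OrderService(), "placeOrder", "a book");
    }
}
```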
AOP is a technique where we extract the cross-cutting concerns (logging, exception handling, ...) from our code into their own aspects, leaving the original code to focus only on the business logic. Not only does this make our code more readable and maintainable, it also keeps the code DRY.
This can be better explained by an example:
Aspect Oriented Programming (AOP) in the .net world using Castle Windsor
or
Aspect Oriented Programming (AOP) in the .net world using Unity
AOP is about cross-cutting concerns, i.e. things that you need to do throughout the whole application, for instance logging. Suppose you want to trace when you enter and exit a method. This is very easy with aspects: you basically specify a "handler" for an event, such as entering a method. If necessary you can also specify with "wildcards" which methods you are interested in, and then it is just a matter of writing the handler code, which could, for instance, log some info.
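A minimal sketch of that idea in Java, using the standard java.lang.reflect.Proxy class rather than a dedicated AOP framework (the interface and class names are made up for illustration): the "handler" traces entry and exit around every method of the wrapped object, while the business code stays untouched.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface AccountService {
    void deposit(int amount);
}

class AccountServiceImpl implements AccountService {
    public void deposit(int amount) {
        System.out.println("depositing " + amount);
    }
}

public class LoggingAspectDemo {
    // Wrap any interface implementation so every call is traced on entry and exit.
    @SuppressWarnings("unchecked")
    static <T> T withLogging(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("entering " + method.getName());
            Object result = method.invoke(target, args);
            System.out.println("exiting " + method.getName());
            return result;
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    public static void main(String[] args) {
        AccountService service = withLogging(new AccountServiceImpl(), AccountService.class);
        service.deposit(100); // business code unchanged; tracing is handled by the "aspect"
    }
}
```

Dedicated AOP tools (AspectJ, or the Castle Windsor and Unity interception mentioned above) automate exactly this kind of wrapping, with wildcards to select which methods are intercepted.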
Aspect-Oriented Programming is basically about separating cross-cutting (non-functional) concerns such as security, logging, and monitoring, and developing them as aspects that are kept aside and plugged in wherever your application needs them. The benefit is cleaner, smaller code in which programmers can focus on the business logic (the core concerns), so a more modular, higher-quality system can be developed.
Well, I think I have enough knowledge of Cocoa that I can go learn something else. What would you recommend learning after Cocoa? (e.g. Core Animation, OpenCL, Core Data)
It really depends what your goals are. If you want to stick with Objective-C, dive into Cocoa Touch if you haven't already. If you want to stick to developing Mac apps, I'd also suggest checking out frameworks such as MacRuby; it is pretty sweet.
If you want to go somewhere totally different, I've been messing around with Rails and Android a lot recently.
Learn LISP.
It is fundamentally different from pretty much every other programming language there is. It will force you to think about problems in new ways. Even if you never use LISP in a real-world project (I never have), you will become a much better programmer.
Anyone who wants to call themselves a programmer should know about (spend at least a full week with):
C - to understand the heavy lifting and how things actually work.
LISP - to understand functional programming.
Smalltalk/Objective-C - to understand real object oriented programming.
Prolog - to understand logic programming.
C++, and any language that derives its OOP design from it, is just C structs with function pointers. Yes, Java and C#, I'm looking at you too.
Learning PostScript is a good way to broaden your understanding of the drawing model also used by Quartz and AppKit, and can be useful for prototyping your drawing code.
Learn another language. Maybe C/C++, since they are similar. Or maybe C#. Or you can try something completely different, such as Python, Pascal, D, or VB.
It depends what you are aiming for, but if you are not strong in C/C++ I would suggest that: a) it's what Cocoa is based on; b) if you want to port your code to other platforms, you'll usually have a good chance of reusing the C/C++ directly without a lot of changes.
(Ex. Core Animation, OpenCL, Core Data)
Those are just tools; if you want to specialize in iPhone development, it's good practice to look up the various features, look at the examples, and then implement a little example for yourself.
Otherwise, if you have no precise goal, you can also go to the bookstore and pick a random book ^^
I asked this question about Microsoft .NET Libraries and the complexity of its source code. From what I'm reading, writing general purpose libraries and writing applications can be two different things. When writing libraries, you have to think about the client who could literally be everyone (supposing I release the library for use in the general public).
What kind of practices or theories or techniques are useful when learning to write libraries? Where do you learn to write code like the one in the .NET library? This looks like a "black art" which I don't know too much about.
That's a pretty subjective question, but here's one objective answer. The Framework Design Guidelines book (be sure to get the 2nd edition) is a very good book about how to write effective class libraries. The content is very good and the often dissenting annotations are thought-provoking. Every shop should have a copy of this book available.
You definitely need to watch Josh Bloch in his presentation How to Design a Good API & Why it Matters (1h 9m long). He is a Java guru but library design and object orientation are universal.
One piece of advice often ignored by library authors is to internalize costs. If something is hard to do, the library should do it. Too often I've seen the authors of a library push something hard onto the consumers of the API rather than solving it themselves. Instead, look for the hardest things and make sure the library does them or at least makes them very easy.
I will be paraphrasing from Effective C++ by Scott Meyers, which I have found to be the best advice I got:
Adhere to the principle of least astonishment: strive to provide classes whose operators and functions have a natural syntax and an intuitive semantics. Preserve consistency with the behavior of the built-in types: when in doubt, do as the ints do.
Recognize that anything somebody can do, they will do. They'll throw exceptions, they'll assign objects to themselves, they'll use objects before giving them values, they'll give objects values and never use them, they'll give them huge values, they'll give them tiny values, they'll give them null values. In general, if it will compile, somebody will do it. As a result, make your classes easy to use correctly and hard to use incorrectly. Accept that clients will make mistakes, and design your classes so you can prevent, detect, or correct such errors.
Strive for portable code. It's not much harder to write portable programs than to write unportable ones, and only rarely will the difference in performance be significant enough to justify unportable constructs.
Even programs designed for custom hardware often end up being ported, because stock hardware generally achieves an equivalent level of performance within a few years. Writing portable code allows you to switch platforms easily, to enlarge your client base, and to brag about supporting open systems. It also makes it easier to recover if you bet wrong in the operating system sweepstakes.
Design your code so that when changes are necessary, the impact is localized. Encapsulate as much as you can; make implementation details private.
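As a small, hypothetical illustration of the "easy to use correctly and hard to use incorrectly" advice above (Java, with a made-up class name): validate once in the constructor and keep the object immutable, so an invalid instance simply cannot exist.

```java
// A small value type that is hard to misuse: it validates its input once,
// in the constructor, and is immutable afterwards.
public final class Percentage {
    private final double value;

    public Percentage(double value) {
        if (value < 0.0 || value > 100.0) {
            throw new IllegalArgumentException("percentage must be between 0 and 100: " + value);
        }
        this.value = value;
    }

    public double asFraction() {
        return value / 100.0;   // always a well-defined result
    }

    public static void main(String[] args) {
        System.out.println(new Percentage(75).asFraction()); // 0.75
        // new Percentage(150) would fail fast instead of propagating a bad value
    }
}
```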
Edit: I just noticed I very nearly duplicated what cherouvim had posted; sorry about that! But it turns out we're linking to different speeches by Bloch, even if the subject is exactly the same. (cherouvim linked to a December 2005 talk, I to a January 2007 one.) Well, I'll leave this answer here; you're probably best off watching both and seeing how his message and his way of presenting it have evolved :)
FWIW, I'd like to point to this Google Tech Talk by Joshua Bloch, who is a greatly respected guy in the Java world, and someone who has given speeches and written extensively on API design. (Oh, and designed some exceptionally good general purpose libraries, like the Java Collections Framework!)
Joshua Bloch, Google Tech Talks, January 24, 2007: "How To Design A Good API and Why it Matters" (the video is about 1 hour long)
You can also read many of the same ideas in his article Bumper-Sticker API Design (but I still recommend watching the presentation!)
(Seeing you come from the .NET side, I hope you don't let his Java background get in the way too much :-) This really is not Java-specific for the most part.)
Edit: Here's another 1½-minute bit of wisdom from Josh Bloch on why writing libraries is hard, and why it's still worth putting effort into (economies of scale), in response to a question wondering, basically, "how hard can it be". (It's part of a presentation about the Google Collections library, which is also totally worth watching, but more Java-centric.)
Krzysztof Cwalina's blog is a good starting place. His book, Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, is probably the definitive work for .NET library design best practices.
http://blogs.msdn.com/kcwalina/
The number one rule is to treat API design just like UI design: gather information about how your users really use your UI/API, what they find helpful and what gets in their way. Use that information to improve the design. Start with users who can put up with API churn and gradually stabilize the API as it matures.
I wrote a few notes about what I've learned about API design here: http://www.natpryce.com/articles/000732.html
I'd start looking more into design patterns. You're probably not going to find much use for some of them, but as you get deeper into your library design the patterns will become more applicable. I'd also pick up a copy of NDepend, a great code measuring utility which may help you decouple things better. You can use the .NET libraries as an example, but personally I don't find them to be great design examples, mostly due to their complexity. Also, start looking at some open source projects to see how they're layered and structured.
A couple of separate points:
The .NET Framework isn't a class library. It's a Framework. It's a set of types meant to not only provide functionality, but to be extended by your own code. For instance, it does provide you with the Stream abstract class, and with concrete implementations like the NetworkStream class, but it also provides you the WebRequest class and the means to extend it, so that WebRequest.Create("myschema://host/more") can produce an instance of your own class deriving from WebRequest, which can have its own GetResponse method returning its own class derived from WebResponse, such that calling GetResponseStream will return your own class derived from Stream!
And your callers will not need to know this is going on behind the scenes!
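A rough sketch of the same extension-point idea in Java rather than .NET (all names are hypothetical; this is not how WebRequest itself is implemented): the framework owns an abstract type, a registry of schemes, and the factory method, while client code plugs in its own subclass.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A miniature "framework": an abstract type plus a factory that clients extend.
abstract class Request {
    abstract String getResponse();

    private static final Map<String, Function<String, Request>> registry = new HashMap<>();

    // The framework's extension point: clients register a creator per scheme.
    static void register(String scheme, Function<String, Request> creator) {
        registry.put(scheme, creator);
    }

    // The framework's factory: callers never need to know the concrete class.
    static Request create(String url) {
        String scheme = url.substring(0, url.indexOf(':'));
        return registry.get(scheme).apply(url);
    }
}

// Client code extends the framework with its own scheme.
class MySchemaRequest extends Request {
    private final String url;
    MySchemaRequest(String url) { this.url = url; }
    String getResponse() { return "handled " + url + " with MySchemaRequest"; }
}

public class ExtensibilityDemo {
    public static void main(String[] args) {
        Request.register("myschema", MySchemaRequest::new);
        Request r = Request.create("myschema://host/more"); // caller sees only Request
        System.out.println(r.getResponse());
    }
}
```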
A separate point is that for most developers, creating a reusable library is not, and should not be, the goal. The goal should be to write the code necessary to meet requirements. In the process, reusable code may be found. In that case, it should be refactored out into a separate library, where it can be reused in the future.
I go further than that (when permitted). I will usually wait until I find two pieces of code that actually do the same thing, or which overlap. Presumably both pieces of code have passed all their unit tests. I will then factor out the common code into a separate class library and run all the unit tests again. Assuming that they still pass, I've begun the creation of some reusable code that works (since the unit tests still pass).
This is in contrast to a lesson I learned in school, when the result of an entire project was a beautiful reusable library - with no code to reuse it.
(Of course, I'm sure it would have worked if any code had used it...)
I learnt about some declarative UI languages such as XUL for Mozilla/Gecko and HTA for Microsoft at Wikipedia.
1. What would be the advantages/disadvantages of these markup languages?
2. Why don't common OSes and applications use these techniques?
3. Do these languages impart flexibility to the system?
4. Are there any OSes that use markup languages for displaying their UI? If not, why?

Examples of these OSes or applications, however ancient they may be, would be welcome.
HTA isn't a markup language. It's basically a container for HTML, which is the markup.
The web is driven around markup languages, so "common" applications do use them.
This is a good place to start reading. Also this.
You might also be interested in Metaprogramming. There's similar ideas to both. You describe something using markup or metadata, and then the program executes it and turns it into something useful.
Many of your questions will be answered in-depth at those links. Except for the last one. I can't think of anything specific on the OS side.
The long and short of my personal experience is that markup is great for defining structures and organization and layout. But behavior isn't well represented. If you want your UI to do something useful, you still need to program it.
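As a tiny, hypothetical sketch of that point in Java: the markup (here an inline, loosely XUL-flavored XML string) declares what the UI contains, but the click behavior still has to be attached in code.

```java
import javax.swing.*;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class MarkupUiDemo {
    // The markup declares structure and layout...
    static final String MARKUP =
        "<window title='Demo'>" +
        "  <button id='hello' label='Say hello'/>" +
        "</window>";

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(MARKUP.getBytes(StandardCharsets.UTF_8)));

        Element window = doc.getDocumentElement();
        Element buttonEl = (Element) window.getElementsByTagName("button").item(0);

        JFrame frame = new JFrame(window.getAttribute("title"));
        JButton button = new JButton(buttonEl.getAttribute("label"));

        // ...but the behavior still has to be written in code.
        button.addActionListener(e -> System.out.println("hello from " + buttonEl.getAttribute("id")));

        frame.add(button);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```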
Are there any OSes that use markup languages for displaying their UI
ISPF was a rapid application development system on IBM mainframes that used declarative markup to define the screens, with Fortran or Cobol code behind it to provide behaviour. One previous job of mine was converting such applications to a XUL-based front-end to run on a PC; it was a fairly trivial conversion.
Re #2: I don't know, but I think it's for performance reasons; you couldn't waste time parsing XML on a 486 :)
Re #4: Yes, if you count Linux as an OS: GTK uses some kind of markup language for its UI. Also XAML in .NET.