For what programs are Objective-C and Ruby ideal on the Mac?

As a Mac outsider, it seems that two popular programming languages on the Mac are Objective-C and Ruby.
From what I understand, the main API, Cocoa, is written in and optimized for Objective-C, but it is also possible to use Ruby with it.
Are there different areas where each language is ideal? For example, I could imagine Objective-C being ideal for a GUI layer or a standalone desktop app, and Ruby being good for web services, etc. What about classic business logic, or data access layers?
What language would be a good choice for a library of services, for example? Can we write a library in one language and link to it from a main program written in the other language?
If I wanted to write a layered enterprise application using domain-driven design and dependency injection, which language could support each concern? Are things like DDD and DI common amongst Mac devs?
Just a curious outsider.

If I were to write a big application, I would stick to Objective-C only. It's not hard, it's the most supported option, and for the foreseeable future it will stay that way. As a cautionary tale for Ruby: there used to be Java support in Cocoa, and it no longer exists. I'd hate to have a large legacy application written in a mixture of Java and Objective-C and face the prospect of rewriting the Java parts or sticking with an older OS.
(The previous paragraph applies to writing pure desktop applications. If you wanted to, and could, write part of the application as, say, a local web service, it would be quite different, as the Ruby support there would be much more dependable. It depends on your people, experience, goals, and other variables.)
Dependency injection and DDD are both abstract ideas, so yes, you can certainly do that in Cocoa. I have no idea how many Mac devs do it. As for DI, there is strong support for loose coupling in Cocoa and the whole technology stack (see Interface Builder, KVO/KVC, or bindings).
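To sketch what DI can look like in plain Objective-C, here is a minimal, hypothetical example (OrderStore and Checkout are invented names, not framework classes; manual retain/release style):

    #import <Foundation/Foundation.h>

    // The dependency is declared as a protocol, not a concrete class.
    @protocol OrderStore <NSObject>
    - (void)saveOrder:(NSString *)orderID;
    @end

    // The domain object receives its collaborator from the outside
    // (constructor injection), so a test can pass in a stub instead.
    @interface Checkout : NSObject {
        id<OrderStore> _store;
    }
    - (id)initWithStore:(id<OrderStore>)store;
    - (void)placeOrder:(NSString *)orderID;
    @end

    @implementation Checkout
    - (id)initWithStore:(id<OrderStore>)store {
        if ((self = [super init])) {
            _store = [store retain];  // take ownership (drop under GC)
        }
        return self;
    }
    - (void)placeOrder:(NSString *)orderID {
        [_store saveOrder:orderID];  // never names a concrete store class
    }
    - (void)dealloc {
        [_store release];
        [super dealloc];
    }
    @end

Interface Builder gives you a similar decoupling for free: outlets are wired at nib-loading time rather than hard-coded, which is why it is often cited as Cocoa's native flavor of DI.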
Hope that helps.

Related

The suitability of a procedural programming language for graphical applications

Is anyone able to explain or evaluate the suitability of a procedural programming language for graphical applications, as against object-oriented programming, for instance? What are the advantages and disadvantages of both?
It is possible to use either since you will probably be using some framework to design the GUI.
For example, if you are considering C, then you will probably use GTK as the framework. But you can still use the C bindings for other frameworks such as wxWidgets (written in C++).
But: procedural programming isn't really a strong fit, because a GUI isn't a procedure.
A procedural environment relies on location in the program (which usually translates to time) to distinguish between different kinds of interactions. A GUI environment relies on location on the screen to distinguish between different kinds of interactions.
So, in a procedural environment you either smush everything together, so you have one place in the program which does everything, or you have a fake GUI, where only some parts of the screen will work at any specific point in time.
That said, I should point out that it's not impossible to write a decent GUI from a procedural environment -- it's just a bit tricky.
And then there's the other way of looking at it: a GUI is, like, chocolate with lots of caramel, and a procedure is, like, all this paperwork. They just don't mix all that well.
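To make the location-in-the-program versus location-on-the-screen point concrete, here is a hedged Objective-C/Cocoa sketch (Cocoa being the platform discussed elsewhere on this page; FormController and the action names are invented for the example):

    #import <Cocoa/Cocoa.h>

    // Each callback is tied to a control (a location on the screen),
    // not to a position in the program's flow of control.
    @interface FormController : NSObject
    - (void)saveClicked:(id)sender;
    - (void)resetClicked:(id)sender;
    @end

    @implementation FormController
    - (void)saveClicked:(id)sender {
        NSLog(@"Save pressed: read the widgets and persist their state.");
    }
    - (void)resetClicked:(id)sender {
        NSLog(@"Reset pressed: clear the widgets.");
    }
    @end

    // Wiring (normally done in Interface Builder):
    //   [saveButton setTarget:controller];
    //   [saveButton setAction:@selector(saveClicked:)];
    // The event loop then decides, from where the user clicks, which of
    // the two methods runs -- in any order, at any time. A procedural
    // design would instead fix that order as a sequence of prompts.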

Objective-C or Cocoa First

I am new to Mac/iOS development. I am coming over from C#/.NET. My question is: should I read up on Objective-C first, or Cocoa? I don't want to buy two books, start reading one, and find out I should have only bought one book and read it.
Thanks
Curtis
Seeing as how the Cocoa frameworks were built with Objective-C, learning the language is the logical first step.
You don't need to become fluent in the language, to the point of exchanging implementations and dispatching blocks like nobody's business, all before you learn the frameworks. Indeed, such an undertaking would be folly.
At the same time, you can't learn Cocoa without a firm grasp of OOP, dynamic dispatch (messaging), MVC, and other important concepts. Some of these underlie Objective-C, some are part of Objective-C, and some are separate from Objective-C.
I say start learning those concepts, including the basic portions of the language, and then pick up the framework before you get too far into it. Like Regexident said in their comment on one of the other answers, you pretty much can't practice Objective-C without using Cocoa anyway, so it won't be long before you need to start in on the framework in order to practice the language—at which point you'll be practicing with both.
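As a minimal, hedged illustration of the messaging idea mentioned above (the Animal/Dog classes are invented for the example; pre-ARC manual retain/release style):

    #import <Foundation/Foundation.h>

    @interface Animal : NSObject
    - (NSString *)speak;
    @end
    @implementation Animal
    - (NSString *)speak { return @"..."; }
    @end

    @interface Dog : Animal
    @end
    @implementation Dog
    - (NSString *)speak { return @"Woof"; }  // overrides Animal's version
    @end

    int main(void) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        Animal *pet = [[[Dog alloc] init] autorelease];
        // [pet speak] sends the "speak" message; which implementation runs
        // is decided at run time by the receiver's actual class (Dog here),
        // not by the static type of the pet variable.
        NSLog(@"%@", [pet speak]);  // prints "Woof", not "..."

        [pool drain];
        return 0;
    }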
Relevant Apple documentation:
Object-Oriented Programming with Objective-C
Learning Objective-C: A Primer
Cocoa Fundamentals Guide
The (full, gory details of the) Objective-C Programming Language
If you don't already know C, start there. As with Objective-C, you don't need to become a full-blown C guru, but you do need to understand pointers, declarations vs. statements, how function calls work, the differences between the primitive types, and other basic concepts.
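Here is a tiny hedged sketch of those C fundamentals; because Objective-C is a superset of C, it compiles unchanged in a .m file:

    #include <stdio.h>

    int main(void) {
        int count = 42;         /* a declaration with an initializer */
        int *p = &count;        /* p holds the address of count      */

        *p = 7;                 /* writing through the pointer...    */
        printf("%d\n", count);  /* ...changes count: prints 7        */

        /* C passes arguments by value (as copies); to let a function
           modify the caller's variable, you pass a pointer to it. */
        return 0;
    }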
Objective-C is just a superset of C. All the Cocoa books that I've seen introduce Objective-C in the first part and then go on to describe how to use the frameworks. I'd recommend getting a Cocoa book; leaf through it and make sure the Objective-C primer in it is meaty enough for you.
I believe there is a lot of good information in the various answers here (especially @Peter Hosey's), but I do want to make sure we're not confusing the OP.
You can't practically learn ObjC without learning Cocoa today. It's like learning C# without learning .NET. You absolutely should focus very early on the patterns and underlying concepts of Cocoa (like MVC, as PH notes). But you'll get that best by working through things that teach you practical iOS or Mac development techniques rather than working through something that tries to provide a more "pure ObjC" experience such as writing command-line apps (though even Foundation is part of Cocoa, so even then you can't escape it).
The key lesson here is that I recommend new students focus on resources that teach them the whole process they need to develop for iOS or OS X. It's worth learning Xcode, ObjC, and Cocoa(Touch) together, along with MVC, target/selector, and the other critical patterns, rather than trying to take them one at a time.
PH is absolutely correct that you don't need to learn all of ObjC in one go. You can do a lot of development and never use blocks or KVO for instance. (That may change in future versions of iOS, but it's still true today.)

PyObjC / Ruby bridge. Is it worthwhile?

Years ago, wanting to write Mac software and having loads of experience with Java WebObjects, I tried the Java bridge but decided to bite the bullet and learn Objective-C (fortunately, since I would have hated having my software deprecated along with the bridge). Later I fooled around with RubyCocoa. I learnt Ruby (found it interesting indeed), but found out the hard way that the bridge was far from mature or stable, and in the end I ended up porting the code back to Objective-C.
Now that years have passed, I'm wondering if it is worthwhile investing some time in MacRuby, or even learning Python to use PyObjC. As much as I like Objective-C, I recall being way more productive with the Ruby bridge when it didn't crash. I would just hate to invest time and end up with crashy software again.
I would say MacRuby is the way to go if you want to try one of the bridges. It's being developed by Laurent Sansonetti, who's a Senior Software Engineer at Apple working on Ruby.
It's quite functional now, and integrates nicely with the native frameworks. Worth a look, particularly if you already have Ruby experience.
If you want to learn Cocoa programming, ignore the bridges. They will only make writing Cocoa applications more difficult and you will waste a bunch of time getting up to speed.
Specifically, you will need to learn Objective-C to be able to understand both the APIs and design patterns of the system frameworks. Furthermore, all of the documentation and tools are written specifically to support Objective-C.
The bulk of your time in learning Cocoa programming will be spent on said APIs and design patterns; the actual language part is relatively small, by comparison.
Note also that the bridges necessarily incur an impedance mismatch in an attempt to map not-quite-the-same functionality from one language to another.
Frankly, if you know Ruby, then Objective-C should be trivially easy; the object models are very similar.
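As a rough, hedged sketch of that similarity (the method names below are real, but the mapping between the two object models is only approximate):

    #import <Foundation/Foundation.h>

    // Approximate correspondence with Ruby's object model:
    //   Ruby:        s.respond_to?(:upcase)   s.send(:upcase)
    //   Objective-C: respondsToSelector:      performSelector:
    int main(void) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        NSString *s = @"hello";
        SEL sel = NSSelectorFromString(@"uppercaseString");
        if ([s respondsToSelector:sel]) {
            // Like Ruby's send: the message is looked up by name at run time.
            NSLog(@"%@", [s performSelector:sel]);  // prints HELLO
        }

        [pool drain];
        return 0;
    }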
My personal opinion: use ObjC for native Mac apps.
Use Ruby/Python where they are supposed to work well natively, without unreliable interfaces with questionable support.
Here is why it is NOT a waste of time: in some cases, Ruby and Python have awesome, well-developed libraries that are not available in Objective-C and are not likely to be.
That is one of the best use cases.
Example: you wouldn't want to reimplement Rails in Objective-C (some people might), but you could easily use it, or parts of it, to power a Cocoa app with MacRuby.
Well, MacRuby is dead; there is the commercial RubyMotion.
There are still PyObjC, RubyCocoa, and mruby.
One of the other intriguing use cases is to provide scriptability that doesn't stink like AppleScript and OSA.
There are valid reasons.

What are alternatives to Objective-C for Mac programming?

I've become very comfortable in the world of pointer-free, garbage-collected programming languages. Now I have to write a small Mac component. I've been learning Objective-C, but as I confront the possibility of dangling pointers and the need to manage retain counts, I feel disheartened.
I know that Objective-C now has garbage collection but this only works with Leopard. My component must work with Tiger, too.
I need to access some Cocoa libraries not available to Java, so that rules out my usual weapon of choice.
What are my alternatives, especially ones with no explicit pointers and with automatic garbage collection?
What do you mean by "component"? Do you mean a chunk of code or a library you are going to hand to other people to link into their apps? If so, then it is not realistic to use any of the bridged languages at this time. While a lot of the bridges are very nice, they almost always have complications and issues that most app developers will not be willing to deal with for a single component, especially if it involves bringing in a substantial runtime.
The bridges are most valuable for bridging other languages' libraries into your Objective-C app. While you can write fairly complete apps using them, doing so often requires a better understanding of Objective-C than simply writing an Objective-C application would, since you need to understand and cope with the language, object model, threading, and memory allocation impedance mismatches that occur.
This is also why many people argue that even if you are quite familiar with a language, trying to learn Cocoa through that language via a bridge is generally more difficult than learning it using Objective-C.
Finally, much of the recent support for bridged languages is due to "BridgeSupport," a feature that was added in Leopard. Even bridges that predate it have been migrating towards it, sometimes in such a way that using the bridged language on Tiger and Leopard can involve substantial differences. Also, there is currently no bridge support for the iPhone, and most bridged languages will not work on it, if that is an issue.
Ultimately, if you are writing a library that is going to be linked into other apps, you need to run on Tiger and Leopard, and you need to access Cocoa-only APIs, I think you will find any non-Objective-C solution quite difficult.
You can try PyObjC to write Cocoa apps in Python, or MacRuby if you are interested in Ruby.
You shouldn't be intimidated by Cocoa's retain/release reference counting. It's much, much easier in practice than GC fans would have you believe. The Cocoa memory-management rules are dead simple; they only affect a tiny amount of your code, and even that code can be generated automagically.
Here's the trick: you encapsulate your MM code in accessor methods, and always use the accessors. Xcode has built-in scripts to generate the appropriate accessors, or if you need more flexibility there are third-party apps like Accessorizer.
This isn't an intrusive approach - you only need to worry about retaining an object if you're going to need to keep it for later use, and if you're going to do that you'll need an instance variable in which to keep it anyway. And, if you're using KVO and bindings, you'll need to use accessors to make sure the appropriate observer notifications are fired. Basically, if you're using good OOP and Cocoa practices, there's practically no additional thought or effort involved with memory management.
Most folks who have difficulty with Cocoa's "manual" memory management run into trouble as a result of misusing it. The most common mistake is to scatter the relevant code all over the place, which means that a missing retain, an extra release, etc. will be difficult to find.
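For instance, here is a minimal, hedged sketch of the accessor pattern described above, in pre-ARC manual retain/release style (the Person class is invented for the example):

    #import <Foundation/Foundation.h>

    @interface Person : NSObject {
        NSString *_name;
    }
    - (NSString *)name;
    - (void)setName:(NSString *)newName;
    @end

    @implementation Person

    - (NSString *)name {
        return _name;
    }

    // All of the retain/release logic lives in this one accessor, so a
    // missing retain or an extra release has only one place to hide.
    - (void)setName:(NSString *)newName {
        if (newName != _name) {
            [newName retain];   // take ownership of the new value
            [_name release];    // give up ownership of the old one
            _name = newName;
        }
    }

    - (void)dealloc {
        [_name release];
        [super dealloc];
    }

    @end

Because -setName: follows the standard key-value coding naming convention, KVO's automatic change notifications fire around it, which is why going through accessors also keeps bindings happy.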
Try any of the Cocoa bridges listed here: http://www.cocoadev.com/index.pl?CocoaBridges
You can also try F-Script - a Smalltalk dialect written specifically for Mac OS X/Cocoa.
RubyCocoa is getting steadily more impressive, and I've seen lots of successful implementations using it. That is, of course, if Ruby's your cup of tea...
You can always use REALbasic (www.realsoftware.com). Real easy and fun to use, though not free. You cannot make dylibs (or DLLs) using it, but you can use dylibs and DLLs in your code. And you can use Cocoa libraries as well.
Don't forget that you can use Java as well, and I don't mean the Java-Cocoa bridge, I mean actual Java.
There's also a package from Apple that provides access to a couple of OS X features as well.
Also, to comment on Shem's point: if you're targeting OS X 10.5 and above, you can take advantage of garbage collection.
If you want Lisp syntax, then Nu is a Lisp implemented on top of Objective-C: http://www.programming.nu/
Also, FreePascal can generate native Carbon apps (Cocoa support is a work in progress).
Look at Python and wxPython (wxWidgets for Python).
wxWidgets has a very elegant App-Doc-View application design pattern. It's not used enough, IMO. I haven't found any wxPython examples of this App-Doc-View pattern, so you have to use the C++ examples to reason out how it would work in Python.
I'd post examples, but I haven't got it all working yet, either.
.NET via Mono: mono-project.com
See the NObjective bridge to Cocoa (http://code.google.com/p/objcmapper/). It provides more features than the others, with less overhead.
I'm looking into Mono too. Objective-C is a little too bizarre for me at this point. Too many years doing C/C++, Java, C#, Perl, etc., I suppose. All of these seem pretty easy to float between. Not so for Objective-C. I love my Mac, but I'm afraid it will take too much precious time to master the language.

Embedded platform development in (!C)

I'm curious to see how popular the alternatives to C are in the embedded developer world, e.g. Ada...
I've only ever used C (with a little bit of assembler), but then my targets have very limited resources. Is there a move elsewhere in this space to something else? What is winning the war in set-top boxes?
If !C, what was the underlying reason?
Compiler support for target
Trace / static analysis tools
other...
Thanks.
Forth is quite popular for embedded development.
Also, while Smalltalk is probably not popular in the embedded community, embedded development is definitely popular in the Smalltalk community.
When you say "embedded development", keep in mind that you have to consider the scale of the project.
When programming something on the scale of a microcontroller or the firmware for an ASIC, you tend to see C and assembly dominate the scene. Embedded developers tend to "specialize" in these languages since compilers for them are available for nearly every embedded target platform. If your project migrates from, say, a chip with a PowerPC core to a chip with an ARM core, you can be fairly confident that your C code will not be overly difficult to port over. Some chips do have compilers available for other languages, but typically they do not match the C compiler in terms of efficiency of the resulting binary. Since embedded systems are often low on resources, system designers want to make their code as efficient as possible (also one reason why you see a lot of assembly language code). I have seen development tools available for languages such as C++, Pascal, Basic, and others, but they are typically niche tools that are not mature enough to match the efficiency of the available C compilers. Debugging tools for these languages also tend to be harder to find than what is available for C/assembly.
You also mentioned set-top boxes. Embedded systems on this scale can pack the equivalent power of a desktop computer from 7-8 years ago. Their available RAM, storage space, and processing power allow them to run full-featured operating systems and interpreters for higher-level languages. On these more powerful systems you will still see C and assembly language being used (for driver code, if nothing else), but other languages (such as Java, Lua, Tcl, Ruby, etc.) are becoming more and more common. Using interpreted languages makes porting code from one platform to another even easier, as long as the platform has sufficient resources to handle the overhead of the language interpreter. Any low-level code that interfaces directly with hardware (drivers) will still typically use assembly or C, since high-level languages don't always have the capability to do this sort of thing. Anything running as an application on top of the embedded operating system can usually be developed and tested inside an emulator or virtual machine, and so you will see a lot of code being developed in whatever language the developer happens to be comfortable with.
TL;DR version: C is popular because it is a versatile language that nearly all developers are familiar with. Assembly is popular because it allows for low-level hardware access in ways that would otherwise be difficult or impossible. Interpreted/scripted languages such as Java are becoming more popular, but the resource requirements of their interpreters may be too much for some embedded systems to handle. The quality and variety of development/debugging tools available for C and assembly also make these options attractive.
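To illustrate the kind of "low-level code that interfaces directly with hardware" mentioned above, here is a hedged C sketch; the register addresses and bit mask are invented for the example and would come from the chip's datasheet on real hardware:

    #include <stdint.h>

    /* Hypothetical memory-mapped UART registers; addresses and
     * bit layout below are made up for illustration. */
    #define UART_STATUS (*(volatile uint32_t *)0x4000C018u)
    #define UART_DATA   (*(volatile uint32_t *)0x4000C000u)
    #define TX_READY    (1u << 5)

    /* Busy-wait until the transmitter is free, then write one byte.
     * The volatile qualifier stops the compiler from caching or
     * reordering the register accesses -- exactly the kind of control
     * that keeps this layer of code in C (or assembly). */
    static void uart_putc(char c)
    {
        while ((UART_STATUS & TX_READY) == 0) {
            /* spin */
        }
        UART_DATA = (uint32_t)c;
    }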
Perhaps not quite the large step from C you're looking for, but C++ is also reasonably popular for embedded projects.
I haven't used it myself, but Bascom is quite popular for AVR microcontrollers. It is a Basic IDE that lets you interact with the peripherals very easily. I've met hardware people who successfully use it.
Yes. Java is becoming more popular - many processors have added instructions that pertain primarily to Java and similar languages (.NET). Also, uClinux runs on microcontrollers, so you can use practically any language on some of the larger micros.
Basic is still common, as is assembly.
You'll see Ada in certain gov't projects.
And some engineers are even putting Lua and other interpreters on their micros so their customers can extend the functionality.
But C is still dominant.
-Adam
In the early '90s I did a lot of embedded development on the 8051, using Intel PLM51 and the DCX51 operating system.
PLM is a very simple language – but very powerful.
We now use C.
If you work in the smartcard space, you get to use Java Card. Yep, Java, on an 8-bit micro. It's kinda fun, actually. I get to develop in Eclipse, test ( & debug!) on the PC simulator, and can be confident that it'll run the same on the card. It's just such a pity Java is a terrible language for embedded apps :)
I've used EC++ (Embedded C++) quite extensively.
Also, PICBasic has been popular with the PIC'ers for eons now.
I have used Ada in an embedded project for military avionics because of customer requirements. There are lots of Ada tools for embedded development, but most of them are very expensive. Personally, I would just use C.
There is a Pascal compiler for 8051
JAL
There is a group of folks working to make Lua a viable option for embedded work. They are targeting primarily 32-bit ARMs with 256K FLASH and 64K RAM or better, and seem happy with their work so far.
They are partly inspired by the classic BASIC-Stamp, a BASIC interpreter running in a moderately powerful PIC with the program itself stored in a serial EEPROM device.
At work, I am still maintaining a customer's embedded system that is written in a compiled flavor of BASIC running on a Zilog Z180 CPU. 1980s technology all around, with most of the system still built out of 24-pin DIP packages in sockets. The compiler runs under CP/M-80 running in a Z80 simulator, which itself runs in the MS-DOS simulator built into Windows. Aside from the sheer amazement that anything productive can be done this way (and that you can still buy 27C256 UV-erasable EPROMs, and that my nearly 20-year-old Data I/O PROM programmer still works), I really wish the customer could afford to move to a new hardware design so the system could be rewritten in a maintainable language.
It depends on the microcontroller: many of them have C, but the compilers are horrible. Assembler is usually easy and the best performing, most efficient, etc. Ones like the MSP, AVR, and ARM have good C compilers, and for those I would (and do) use C, depending on the problem.
I would stick to C or assembler; you are wasting memory, performance, and resources using anything else.
Pascal and Modula-2 work fine too. Essentially they are pretty much equivalent to C, except for the inability to do alloca (though some have that as an extension).
But the core problem will be the problem with any !C compiler: which do you prefer, the better compiler/toolchain or the language of preference?
Although I like the Wirthian languages most, I simply use C, and live with the consequences, simply because the toolchain is better.
There have been examples in the past (Pascals, or even tightly compiled Basics), but C is mostly the norm. I never understood why.
I worked on a device which ran some incredibly old version of Python (1.4 or something). There was no way to debug it (other than printing debug messages), so when your code hit an exception everything would just stop and you'd scratch your head for an hour. Whenever you made a change and upgraded the code it was running, it took about 10 minutes to interpret and compile it.
Needless to say we scrapped that and replaced the microcontroller with one that ran C.
See this related question: What languages are used for real-time systems programming?
In response to your "why" question, from the standpoint of government/military acquisition, there is a perception that Java (language, platform, etc...) is the lingua franca these days and that economies of scale in the language will reduce acquisition and maintenance cost. There's also a hope that one can efficiently train a competent Java programmer to be a reasonable RT/embedded programmer in Java faster than if they are required to learn a new language. This rationale is suspect, in my opinion, but it does answer the "why" question.
If you include the iPhone as an embedded platform, then Objective-C.
Considering how many times I've had a Java out-of-memory exception on my phone (most of the time I do anything remotely interesting), I'd run away from Java like a bat out of a hot place.
I've heard that Erlang was designed for use in cell phones. I think Lisp is a good architecture for remote device support - if the device can handle the runtime.
A lot of home-brew users and small companies needing a cheap solution have found that the Tiny Tiger and BASIC Stamp (using BASIC) meet their needs.