If I write a framework that gets information from the Internet, should I use a delegate or blocks? - objective-c

Say I'm writing a publicly available framework for the Vimeo API. This framework needs to get information from the Internet. Because this can take some time, I need to use threading to prevent the UI from hanging. Foundation uses delegates for this, like NSURLConnectionDelegate. However, Game Kit uses blocks as callback functions.
What is the recommended way of doing this? I know blocks aren't supported in standard GCC versions, but they require much less code from whoever uses my framework.
Delegates, on the other hand, are real methods, and when protocols are used I can be sure the methods are actually implemented.
Thanks.

I really like blocks, but I would be tempted to use a delegate protocol in this case. Network connections can fail in a large number of ways, and their delegates tend to keep a fair amount of stateful information about them. I find that maps well to a delegate protocol with a number of optional methods.
If you're providing a very simplified API for accessing network data, then a success/failure pair of blocks might be sufficient. Personally, I find that I have to deal with a lot of different cases which use many delegate methods on a stateful delegate object. For example: should I retry failed connections immediately or later, does the relative priority of failed connections change, can I make use of a partial response, should I switch a connection to wifi when it becomes available, do I offer a user a chance to authenticate if prompted, do I display incremental progress in a connection? You could handle all of those with blocks, but I find that I would rather have a delegate class managing the connection.
Without knowing more about what data you intend for your interface to fetch, I don't know that I can be more specific, but I would be tempted to allow users of the API to manage their own connection state if possible.
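To make the trade-off concrete, here is a rough sketch of the two styles side by side, using a hypothetical VimeoClient class (all names here are made up for illustration, not part of any real API):

    @class VimeoClient;

    // Delegate style: a stateful protocol with optional methods is a
    // natural home for progress, retries, authentication prompts, etc.
    @protocol VimeoClientDelegate <NSObject>
    - (void)client:(VimeoClient *)client didFetchVideoInfo:(NSDictionary *)info;
    - (void)client:(VimeoClient *)client didFailWithError:(NSError *)error;
    @optional
    - (void)client:(VimeoClient *)client didUpdateProgress:(float)progress;
    @end

    @interface VimeoClient : NSObject
    @property (assign) id<VimeoClientDelegate> delegate;

    // Block style: a concise success/failure pair for one-shot fetches.
    - (void)fetchVideoInfo:(NSString *)videoID
                   success:(void (^)(NSDictionary *info))success
                   failure:(void (^)(NSError *error))failure;
    @end

Nothing stops you from exposing both, with the block variant implemented on top of the delegate machinery.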

It all depends on who your target audience is. If you want people writing apps for OS X 10.5 or iOS 3.x, then you need to use delegates. Otherwise, go ahead and use blocks.

It's quite a subjective question since both are valid options, but Apple seems to be shifting further towards using blocks for "throw-away" methods.

The main question would be your target audience.
Blocks are limited to Snow Leopard (and iOS 4, I think - I can't remember).
If you want your framework to be usable on earlier operating systems, you can't use blocks.
If you're happy with the OS limitations, then go with blocks and NSOperationQueue; it's really good and simple to use (see the sketch below).
Better yet, you could offer both options.
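For the blocks-and-NSOperationQueue route, the basic pattern is short. A minimal sketch (the URL is just a placeholder):

    // Fetch off the main thread, then hop back to the main queue for UI work.
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    NSURL *url = [NSURL URLWithString:@"http://example.com/video.json"];
    [queue addOperationWithBlock:^{
        NSData *data = [NSData dataWithContentsOfURL:url]; // blocking fetch
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            NSLog(@"received %lu bytes", (unsigned long)[data length]);
        }];
    }];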

I would recommend using blocks, and if you do it right, you can support 10.5 at the same time.
Check out the open-source PLBlocks runtime, it allows you to seamlessly use blocks on both 10.5 and 10.6.

Related

Asynchronous Messaging Protocol compatibility outside Python (and twisted)

The Asynchronous Messaging Protocol is a simple protocol in python-twisted. I have a fairly complete app (python, twisted, kivy) using it. The client-server architecture implements a view-controller sort of relationship, with almost all business logic server-side and the UI code simply reflecting changes in the state of models (sent by the server) and sending the appropriate AMP messages.
Here is a list of implementations of the AMP protocol in other languages, but some seem unfinished, and most don't seem to be actually used for anything serious.
The use-case I'm looking at is a fully Python app which currently works on Windows, Linux, and Android (possibly iOS if I ever get round to building that). And possibly, in the future, replacing the View/UI bit with a 'native' language (Java/Swift on Android, for instance) while keeping the business bits in python and twisted.
So I have two main questions:
Is it accurate to say that AMP is only really used within python-twisted and those programs that use it?
Are there other, more generally useful network protocols which are both implemented and fairly easy to use in twisted, as well as being non-specific (e.g. jabber is really only for chat)? Preferably ones which don't require a server, as WAMP/autobahn do (if I understand correctly), so they can be self-contained within any device which can run python.
This isn't entirely accurate; Twisted just happens to use it the most. Other languages make use of AMP. It's just that AMP hasn't become very popular, given the popularity of other, more robust options like AMQP (RabbitMQ, WebSphere MQ, etc.) and ZeroMQ.
AMP is about as simple as it can get. Also, it's unlikely you will find a solution without a server.
AMP is not locked to Twisted or Python. There are other implementations in other languages but like you said some are not used in a "serious" manner and often go unmaintained. Don't let that scare you off because the protocol is so simple, there often isn't much to do after it's been implemented. You will be happy to know that the actual protocol hasn't changed much and isn't very difficult to implement in any language if you follow the design. If you want something more generic, cross platform, and ensured compatibility, then consider HTTP requests.
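To give a sense of how simple the wire format is: an AMP "box" is just a sequence of key/value pairs, each prefixed with a 16-bit big-endian length, terminated by an empty key. A sketch of an encoder in Objective-C (to match the rest of this page; the Sum command in the comment is purely illustrative):

    #import <Foundation/Foundation.h>
    #include <arpa/inet.h> // htons

    // Encode one AMP box. Keys must be under 256 bytes; a zero-length
    // key (two zero bytes) marks the end of the box.
    static NSData *AMPEncodeBox(NSDictionary *box) {
        NSMutableData *out = [NSMutableData data];
        for (NSString *key in box) {
            NSData *k = [key dataUsingEncoding:NSUTF8StringEncoding];
            NSData *v = [[box objectForKey:key] dataUsingEncoding:NSUTF8StringEncoding];
            uint16_t klen = htons((uint16_t)[k length]);
            uint16_t vlen = htons((uint16_t)[v length]);
            [out appendBytes:&klen length:2]; [out appendData:k];
            [out appendBytes:&vlen length:2]; [out appendData:v];
        }
        uint16_t end = 0;
        [out appendBytes:&end length:2];
        return out;
    }

    // A request box for a remote command named "Sum" would be:
    // AMPEncodeBox(@{@"_command": @"Sum", @"_ask": @"1",
    //                @"a": @"13", @"b": @"81"});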

What is the Preferred Method for Cross-Process Communication in OS X?

Quite simply, I need a basic method for sending broadcast-type events between processes owned by different users, so that I can negotiate a simple queueing mechanism (to prevent the processes from attempting to do all of their work at the same time).
Now, the only system I'm aware of for doing this would be via notifyd, or more specifically, by using the various notify functions available in Objective-C (or actually, C++/C?).
However, a lot has changed, and in particular I'm trying to dive back in using Swift, while writing an application that will play nice with the Mac App Store's required sandboxing scheme. So I'm curious: is communication via notifyd still the preferred mechanism for inter-process communication in OS X, or is there something else I would be better off using?
As I say, my needs are fairly simple; I really just need to be able to let other processes know when a new process starts up, so they can negotiate a simple system for performing their work in a round-robin fashion, without requiring some kind of central process (as that scheme would need elevated permissions to work).
There are several ways to do this in OS X; a list here covers many of them. I'd say the preferred method depends on what you are trying to accomplish. If you really do just need to send a notification, then the BSD notification system is a lightweight way to do that.
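A minimal sketch of that API (shown in Objective-C for consistency with the rest of this page; the same C functions from <notify.h> are callable from Swift, and the notification name here is made up - reverse-DNS names avoid collisions between unrelated programs):

    #import <Foundation/Foundation.h>
    #include <notify.h>

    int main(void) {
        int token = 0;
        // The handler block runs on the given GCD queue whenever any
        // process (owned by any user) posts this name.
        notify_register_dispatch("com.example.worker.started", &token,
                                 dispatch_get_main_queue(), ^(int t) {
            NSLog(@"a new worker announced itself");
        });

        // Broadcasting is one call:
        notify_post("com.example.worker.started");

        dispatch_main(); // keep the process alive to receive notifications
    }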

Possible to share information between an add-on to an existing program and a standalone application? [duplicate]

I'm looking at building a Cocoa application on the Mac with a back-end daemon process (really just a mostly-headless Cocoa app, probably), along with 0 or more "client" applications running locally (although if possible I'd like to support remote clients as well; the remote clients would only ever be other Macs or iPhone OS devices).
The data being communicated will be fairly trivial, mostly just text and commands (which I guess can be represented as text anyway), and maybe the occasional small file (an image possibly).
I've looked at a few methods for doing this but I'm not sure which is "best" for the task at hand. Things I've considered:
Reading and writing to a file (…yes), very basic but not very scalable.
Pure sockets (I have no experience with sockets, but I seem to think I can use them to send data locally and over a network), though it seems cumbersome if doing everything in Cocoa
Distributed Objects: seems rather inelegant for a task like this
NSConnection: I can't really figure out what this class even does, but I've read of it in some IPC search results
I'm sure there are things I'm missing, but I was surprised to find a lack of resources on this topic.
I am currently looking into the same questions. For me the possibility of adding Windows clients later makes the situation more complicated; in your case the answer seems to be simpler.
About the options you have considered:
Control files: While it is possible to communicate via control files, you have to keep in mind that the files need to be communicated via a network file system among the machines involved. So the network file system serves as an abstraction of the actual network infrastructure, but does not offer the full power and flexibility the network normally has. Implementation: Practically, you will need to have at least two files for each pair of client/servers: a file the server uses to send a request to the client(s) and a file for the responses. If each process can communicate both ways, you need to duplicate this. Furthermore, both the client(s) and the server(s) work on a "pull" basis, i.e., they need to revisit the control files frequently and see if something new has been delivered.
The advantage of this solution is that it minimizes the need for learning new techniques. The big disadvantage is that it has huge demands on the program logic; a lot of things need to be taken care of by you (Will the files be written in one piece or can it happen that any party picks up inconsistent files? How frequently should checks be implemented? Do I need to worry about the file system, like caching, etc? Can I add encryption later without toying around with things outside of my program code? ...)
If portability were an issue (which, as far as I understood from your question, is not the case) then this solution would be easy to port to different systems and even different programming languages. However, I don't know of any network file system for iPhone OS, though I am not familiar with that platform.
Sockets: The programming interface is certainly different; depending on your experience with socket programming it may mean that you have more work learning it first and debugging it later. Implementation: Practically, you will need a similar logic as before, i.e., client(s) and server(s) communicating via the network. A definite plus of this approach is that the processes can work on a "push" basis, i.e., they can listen on a socket until a message arrives which is superior to checking control files regularly. Network corruption and inconsistencies are also not your concern. Furthermore, you (may) have more control over the way the connections are established rather than relying on things outside of your program's control (again, this is important if you decide to add encryption later on).
The advantage is that a lot of things are taken off your shoulders that would bother an implementation in 1. The disadvantage is that you still need to change your program logic substantially in order to make sure that you send and receive the correct information (file types etc.).
In my experience portability (i.e., ease of transitioning to different systems and even programming languages) is very good since anything even remotely compatible to POSIX works.
[EDIT: In particular, as soon as you communicate binary numbers, endianness becomes an issue and you have to take care of this problem manually - this is a common (!) special case of the "correct information" issue I mentioned above. It will bite you e.g. when you have a PowerPC talking to an Intel Mac. This special case disappears with solutions 3.+4., together with all of the other "correct information" issues.]
3.+4. Distributed objects: The NSProxy class cluster is used to implement distributed objects. NSConnection is responsible for setting up remote connections as a prerequisite for sending information around, so once you understand how to use this system, you also understand distributed objects. ;^)
The idea is that your high-level program logic does not need to be changed (i.e., your objects communicate via messages and receive results, and the messages together with the return types are identical to what you are used to from your local implementation) without having to bother about the particulars of the network infrastructure. Well, at least in theory. Implementation: I am also working on this right now, so my understanding is still limited. As far as I understand, you do need to set up a certain structure, i.e., you still have to decide which processes (local and/or remote) can receive which messages; this is what NSConnection does. At this point, you implicitly define a client/server architecture, but you do not need to worry about the problems mentioned in 2.
There is an introduction with two explicit examples at the Gnustep project server; it illustrates how the technology works and is a good starting point for experimenting:
http://www.gnustep.org/resources/documentation/Developer/Base/ProgrammingManual/manual_7.html
Unfortunately, the disadvantages are a total loss of compatibility with other systems (although you will still do fine with the setup you mentioned, of Macs and iPhone/iPad only) and a loss of portability to other languages. Gnustep with Objective-C is at best code-compatible, but there is no way to communicate between Gnustep and Cocoa; see my edit to question number 2 here: CORBA on Mac OS X (Cocoa)
[EDIT: I just came across another piece of information that I was unaware of. While I have checked that NSProxy is available on the iPhone, I did not check whether the other parts of the distributed objects mechanism are. According to this link: http://www.cocoabuilder.com/archive/cocoa/224358-big-picture-relationships-between-nsconnection-nsinputstream-nsoutputstream-etc.html (search the page for the phrase "iPhone OS") they are not. This would exclude this solution if you demand to use iPhone/iPad at this moment.]
So to conclude, there is a trade-off between the effort of learning (and implementing and debugging) new technologies on the one hand, and hand-coding lower-level communication logic on the other. While the distributed objects approach takes most of the load off your shoulders and incurs the smallest changes in program logic, it is the hardest to learn and also (unfortunately) the least portable.
Disclaimer: Distributed Objects are not available on iPhone.
Why do you find distributed objects inelegant? They sound like a good match here:
transparent marshalling of fundamental types and Objective-C classes
it doesn't really matter whether clients are local or remote
not much additional work for Cocoa-based applications
The documentation might make it sound like more work than it actually is, but all you basically have to do is use protocols cleanly and export, or respectively connect to, the server's root object.
The rest should happen automagically behind the scenes for you in the given scenario.
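A rough sketch of how little is involved (the protocol and the registered name are hypothetical; the two halves run in different processes):

    #import <Foundation/Foundation.h>

    // Shared protocol, compiled into both processes.
    @protocol JobService
    - (oneway void)enqueueCommand:(NSString *)command;
    @end

    // Server process: vend a root object under a registered name.
    void VendService(id<JobService> jobManager) {
        NSConnection *server = [[NSConnection alloc] init];
        [server setRootObject:jobManager];
        [server registerName:@"com.example.jobservice"];
    }

    // Client process: obtain a proxy and message it like a local object.
    void CallService(void) {
        NSDistantObject *proxy = (NSDistantObject *)
            [NSConnection rootProxyForConnectionWithRegisteredName:
                              @"com.example.jobservice" host:nil];
        [proxy setProtocolForProxy:@protocol(JobService)]; // cuts round trips
        [(id<JobService>)proxy enqueueCommand:@"start"];
    }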
We are using ThoMoNetworking and it works fine and is fast to set up. Basically, it allows you to send NSCoding-compliant objects in the local network, but of course it also works if client and server are on the same machine. As a wrapper around the Foundation classes, it takes care of pairing, reconnections, etc.

What's the difference between libev and libevent?

Both libraries are designed for asynchronous I/O scheduling, and both use epoll on Linux, kqueue on FreeBSD, etc.
Superficial differences aside, what is the TRUE difference between these two libraries with regard to architecture and design philosophy?
As for design philosophy, libev was created to improve on some of the architectural decisions in libevent. For example: global variable usage made it hard to use libevent safely in multithreaded environments; watcher structures are big because they combine I/O, time, and signal handlers in one; the extra components such as the http and dns servers suffered from bad implementation quality and resultant security issues; and timers were inexact and didn't cope well with time jumps.
Libev tried to improve each of these, by not using global variables but using a loop context for all functions, by using small watchers for each event type (an I/O watcher uses 56 bytes on x86_64 compared to 136 for libevent), allowing extra event types such as timers based on wallclock vs. monotonic time, inter-thread interruptions, prepare and check watchers to embed other event loops or to be embedded and so on.
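For concreteness, here is roughly what the loop-context, one-watcher-per-event-type style looks like (a sketch adapted from the example in libev's documentation):

    #include <ev.h>
    #include <stdio.h>

    static ev_io stdin_watcher;
    static ev_timer timeout_watcher;

    // Called when stdin becomes readable.
    static void stdin_cb(EV_P_ ev_io *w, int revents) {
        puts("stdin ready");
        ev_io_stop(EV_A_ w);         // one-shot: stop this watcher
        ev_break(EV_A_ EVBREAK_ALL); // leave the event loop
    }

    // Called once, 5.5 seconds after the loop starts.
    static void timeout_cb(EV_P_ ev_timer *w, int revents) {
        puts("timeout");
        ev_break(EV_A_ EVBREAK_ONE);
    }

    int main(void) {
        struct ev_loop *loop = EV_DEFAULT; // loop context, no globals

        ev_io_init(&stdin_watcher, stdin_cb, 0, EV_READ);
        ev_io_start(loop, &stdin_watcher);

        ev_timer_init(&timeout_watcher, timeout_cb, 5.5, 0.);
        ev_timer_start(loop, &timeout_watcher);

        ev_run(loop, 0); // dispatch events until ev_break
        return 0;
    }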
The extra component problem is "solved" by not having them at all, so libev can be small and efficient, but you also need to look elsewhere for an http library, because libev simply doesn't have one (for example, there is a very related library called libeio that does asynchronous I/O, which can be used independently or together with libev, so you can mix and match).
So in short, libev tries to do one thing only (POSIX event library), and this in the most efficient way possible. Libevent tries to give you the full solution (event lib, non-blocking I/O library, http server, DNS client).
Or, even shorter, libev tries to follow the UNIX toolbox philosophy of doing one thing only, as well as possible.
Note that this is the design philosophy, which I can state with authority because I designed libev. Whether these design goals have actually been reached, or whether the philosophy is based on sound principles, is up to you to judge.
Update 2017:
I was asked multiple times what timer inexactness I refer to, and why libev doesn't support IOCPs on Windows.
As for timers, libevent schedules timers relative to some unknown base time that is in the future, without you knowing it. Libev can tell you in advance what base time it will use to schedule timers, which allows programs to use both the libevent approach and the libev approach. Furthermore, libevent would sometimes expire timers early, depending on the backend. The former is an API issue, the latter is fixable (and might have been fixed since - I didn't check).
As for IOCP support - I don't think it can be done, as IOCPs are simply not powerful enough. For one thing, they need a special socket type, which would limit the set of handles allowed on Windows even more (for example, the sockets used by perl are of the "wrong" type for IOCPs). Furthermore, IOCPs simply don't support I/O readiness events at all; they can only do actual I/O. There are workarounds for some handle types, such as doing a dummy 0-byte read, but again, this would limit the handle types you can use on Windows even more, and furthermore would rely on undocumented behaviour that is likely not shared by all socket providers.
To my knowledge, no other event library supports IOCPs on Windows, either. What libevent does is, in addition to the event library, allow you to queue read/write operations, which can then be done via IOCPs. Since libev does not do I/O for you, there is no way to use IOCPs in libev itself.
This is indeed by design - libev tries to be small and POSIX-like, and Windows simply does not have an efficient way to get POSIX-style I/O events. If IOCPs are important, you either have to use them yourself, or indeed use one of the many other frameworks that do I/O for you and can therefore use IOCPs.
The great advantage of libevent for me is the built-in OpenSSL support. The bufferevent interface, introduced in version 2.0 of the libevent API, handles secure connections almost painlessly for the developer.
Maybe my knowledge has gone out of date, but it seems like libev does not support this.
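For reference, the bufferevent/OpenSSL combination looks roughly like this (a sketch of a TLS client with error handling omitted; example.com is a placeholder):

    #include <event2/event.h>
    #include <event2/bufferevent.h>
    #include <event2/bufferevent_ssl.h>
    #include <openssl/ssl.h>
    #include <sys/socket.h>

    static void read_cb(struct bufferevent *bev, void *ctx) {
        // Data read here is already-decrypted plaintext.
    }

    static void event_cb(struct bufferevent *bev, short what, void *ctx) {
        // BEV_EVENT_CONNECTED fires once the TLS handshake completes.
    }

    int main(void) {
        struct event_base *base = event_base_new();
        SSL_library_init(); // needed for OpenSSL < 1.1
        SSL_CTX *ssl_ctx = SSL_CTX_new(SSLv23_client_method());
        SSL *ssl = SSL_new(ssl_ctx);

        // libevent drives the handshake and en/decryption internally.
        struct bufferevent *bev = bufferevent_openssl_socket_new(
            base, -1, ssl, BUFFEREVENT_SSL_CONNECTING, BEV_OPT_CLOSE_ON_FREE);
        bufferevent_setcb(bev, read_cb, NULL, event_cb, NULL);
        bufferevent_socket_connect_hostname(bev, NULL, AF_INET,
                                            "example.com", 443);
        bufferevent_enable(bev, EV_READ | EV_WRITE);

        event_base_dispatch(base);
        return 0;
    }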
Here are the source code links:
libev: https://github.com/enki/libev
libevent: https://github.com/libevent/libevent
I find the latest commit to libev was in 2015, but libevent still has commits up until July 2022, so whether a library has people maintaining it is also important.

Web app using API for everything?

I'm about to start planning an internal project management tool for my company. One thing that has always left me wondering is APIs.
Would it be seen as bad practice / too inefficient to create an API first and build the actual site using those API calls, rather than implementing it twice?
Let me know your thoughts!
I completely agree that developing an API will give you a decoupled architecture, and I recommend that.
However, I feel you should be warned that developing the API first increases your risk of developing the wrong API (PM, by the way, is largely about reducing project risk). You will also be tempted to gold-plate your API: programming features that may go unused, which wastes time. Developing the API in conjunction with the application guarantees that it correctly serves the actual application's (or applications') needs. Unless you are confident in the accuracy of, and your understanding of, the requirements, I suggest programming the API one feature at a time, together with the application.
For example, as you develop the application and discover the precise point at which you need to make an API call, create an interface (depending on the technology) that looks exactly like what you need. You can stub that interface to get the app to run, which is a great tool for checking that the app is still on track with user expectations. ("You want it to work like this, right?") Later, you can implement that interface. If by chance requirements suffer alteration, you won't have spent time building now obsolete infrastructure.
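A sketch of that stub-first idea in Objective-C (to match the rest of this page; the ProjectStore protocol and all names are hypothetical): define the interface the application actually needs, stub it with canned data so the app runs, and swap in the real API-backed implementation later.

    // The interface is shaped by exactly what the application needs.
    @protocol ProjectStore <NSObject>
    - (void)fetchProjectsWithCompletion:
        (void (^)(NSArray *projects, NSError *error))completion;
    @end

    // A stub keeps the app runnable (and demoable) before the API exists.
    @interface StubProjectStore : NSObject <ProjectStore>
    @end

    @implementation StubProjectStore
    - (void)fetchProjectsWithCompletion:
        (void (^)(NSArray *, NSError *))completion {
        completion(@[@"Website redesign", @"Q3 planning"], nil); // canned data
    }
    @end

    // Later, an APIProjectStore conforming to the same protocol replaces
    // the stub without any change to the application code.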