Objective-C networking - best practices?

I'm building an Objective-C app that has both a server and a client. The client can send updates to the server, and the server needs to be able to send updates to each connected client. I've been thinking about how best to implement this system, but am asking for your suggestions.
Currently, I'm thinking that when new updates are available, the server will use threads to send the update to each client in turn. If a client times out, they are disconnected.
I have very little networking experience, so I'm asking for your insight.
Do you think that this system would work well?
If so, do you have any suggestions about how to do the threading? Any NS classes you can point me at? There's got to be some kind of queue I can use, I'm thinking.
Any other thoughts?
EDIT: I do not expect the client count to get much above 50 or so, at the max.

As long as both client and server are OS X apps and can both be written in Objective-C using the Cocoa frameworks, I would highly recommend you take a look at the Distributed Objects (DO) technology in Cocoa. I won't try to give a tutorial in Distributed Objects here, just explain why it might be useful...
DO handles asynchronous network details for you (all your client updates could happen on a single thread). In addition, the semantics of communication with a remote object (client to server or vice versa; DO is bidirectional once the connection is established) are very similar to in-process communication. In other words, once you have a reference to the remote object (really an NSDistantObject which acts as a proxy to the object on the other end of the connection), your client code can send messages to the remote object as if it were local:
[remoteServer update:client];
from the client or
[[remoteClientList objectAtIndex:i] update:server];
from the server. I'll leave the details of setting up the connection and getting the remoteServer or remoteClient reference to you after you've read the Distributed Objects programming guide.
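For a rough idea of what that setup looks like (the service name, port number, host name, and ServerUpdates protocol below are arbitrary placeholders, not anything from the docs), the server vends its root object over an NSSocketPort and the client asks the socket port name server for a proxy:

// Server: vend an object over TCP (port and service name are made-up examples)
NSSocketPort *receivePort = [[NSSocketPort alloc] initWithTCPPort:8081];
NSConnection *connection = [NSConnection connectionWithReceivePort:receivePort sendPort:nil];
[connection setRootObject:server]; // 'server' is the object implementing your update methods
[connection registerName:@"UpdateServer" withNameServer:[NSSocketPortNameServer sharedInstance]];

// Client: get a proxy to the server's root object
id remoteServer = [NSConnection rootProxyForConnectionWithRegisteredName:@"UpdateServer"
                                                                    host:@"server.example.com"
                                                         usingNameServer:[NSSocketPortNameServer sharedInstance]];
[remoteServer setProtocolForProxy:@protocol(ServerUpdates)]; // optional; ServerUpdates is a hypothetical protocol
[remoteServer update:client];

Error handling and keeping the connection alive are left out here; the programming guide covers both.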
The downside of using DO is that you are tied to Cocoa; it will be very difficult to write a non-Cocoa client or server that communicates using Distributed Objects. If there's a chance you may want non-Cocoa client or server implementations, you should not use DO. In this case, I would recommend something simple with a lot of cross-platform and language support. A REST-style API over HTTP is a good option. Have a look at the Cocoa URL Loading System documentation for info on how to implement HTTP requests and responses. Have a look at Apple's CocoaHTTPServer example code or a code.google.com project of the same name for info on implementing an HTTP server in your Cocoa code.
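If you go the HTTP route, the client side can be as small as an asynchronous NSURLConnection with a couple of delegate methods. A minimal sketch (the URL, the receivedData/updateConnection properties, and handleUpdateData: are made up for illustration):

// In the client class; receivedData is an NSMutableData property, and the URL is a made-up example
- (void)fetchUpdates {
    NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/updates"]];
    self.receivedData = [NSMutableData data];
    self.updateConnection = [[NSURLConnection alloc] initWithRequest:request delegate:self]; // asynchronous, non-blocking
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    [self.receivedData appendData:data]; // may be called several times per response
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    [self handleUpdateData:self.receivedData]; // your own parsing/dispatch code
}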
As a very last option, you can take a look at the Cocoa Stream Programming Guide if you want to implement your own network protocol. NSStream's subclasses (NSInputStream and NSOutputStream) will let you handle asynchronous reads and writes to and from a network socket. A lot of people use AsyncSocket for this purpose. It wraps the lower-level CFStream and CFSocket and makes writing network code somewhat easier.
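And if you do go with AsyncSocket, the delegate-based accept/read cycle keeps everything on one run loop. A rough sketch (the port, the listenSocket and clients properties, and handleMessage:fromClient: are placeholders of my own):

// Server: start listening; AsyncSocket drives the delegate callbacks from the run loop
- (BOOL)startListening {
    self.listenSocket = [[AsyncSocket alloc] initWithDelegate:self];
    NSError *error = nil;
    return [self.listenSocket acceptOnPort:8081 error:&error]; // port is an arbitrary example
}

- (void)onSocket:(AsyncSocket *)sock didAcceptNewSocket:(AsyncSocket *)newSocket {
    [self.clients addObject:newSocket]; // keep a reference so the socket isn't deallocated
    [newSocket readDataToData:[AsyncSocket CRLFData] withTimeout:-1 tag:0]; // read one CRLF-terminated message
}

- (void)onSocket:(AsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag {
    [self handleMessage:data fromClient:sock]; // your own protocol handling
    [sock readDataToData:[AsyncSocket CRLFData] withTimeout:-1 tag:0]; // queue up the next read
}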

When the server sends updates to the clients, it would probably be easier to have one thread handle them all and just use async sockets. Of course, this would depend on how many clients you have to deal with, too.

There are several networking examples on the Apple developer site.
One I would recommend you check out is URLCache, which can be downloaded.
Quoting from the Apple's documentation for this example:
URLCache is a sample iPhone application that demonstrates how to download a resource off the web, store it in the application's data directory, and use the local copy of the resource. URLCache also demonstrates how to implement a couple of caching policies:

An interesting option is the BLIP protocol from Jens Alfke. It's like a stripped-down version of BEEP: a message-oriented networking system. It basically provides the low-level abstractions for a bidirectional message pipe so you can concentrate on layering your communication protocol on top of it.
It has some worthy followers such as Marcus Zarra (author of the Core Data bible) and Gus Mueller of Flying Meat Software.

I don't know how you plan to design your system, but usually a server cannot connect to a client; the client must initiate the communication. With a low limit of 50 clients, you may not be looking at a web-server/client-like implementation...
That said, there are basically two ways to handle client-server communication:
1. The client polls the server periodically to get updates (sketched below)
2. The client keeps a connection open to the server, and the server responds using a well-known (as in both sides understand it) protocol.
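To make option 1 concrete in Cocoa terms, polling is just a repeating NSTimer that kicks off the same kind of asynchronous request shown in an answer above (the interval, URL, and pollTimer property here are arbitrary examples):

// Option 1, sketched: poll the server every 30 seconds
- (void)startPolling {
    self.pollTimer = [NSTimer scheduledTimerWithTimeInterval:30.0
                                                      target:self
                                                    selector:@selector(pollServer:)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)pollServer:(NSTimer *)timer {
    NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/updates"]];
    [[NSURLConnection alloc] initWithRequest:request delegate:self]; // responses arrive via the usual delegate methods
}

Option 2 is essentially what the DO and AsyncSocket approaches above give you: a connection that stays open so the server can push.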

Related

Async WCF and Protocol Behaviors

FYI: This will be my first real foray into Async/Await; for too long I've been settling for the familiar territory of BackgroundWorker. It's time to move on.
I wish to build a WCF service, self-hosted in a Windows service running on a remote machine in the same LAN, that does this:
Accepts a request for a single .ZIP archive
Creates the archive and packages several files
Returns the archive as its response to the request
I have to support archives as large as 10GB. Needless to say, this scenario isn't covered by basic WCF designs; we must take additional steps to meet the requirement. We must eliminate timeouts while the archive is building and memory errors while it's being sent. Both of these occur under basic WCF designs, depending on the size of the file returned.
My plan is to proceed using task-based asynchronous WCF calls and streaming mode.
I have two concerns:
Is this the proper approach to the problem?
Microsoft has done a nice job at abstracting all of this, but what of the underlying protocols? What goes on 'under the hood?' Does the server keep the connection alive while the archive is building (could be several minutes) or instead does it close the connection and initiate a new one once the operation is complete, thereby requiring me to properly route the request through the client machine firewall?
For #2, clearly I'm hoping for the former (keep-alive). But after some searching I'm not easily finding an answer. Perhaps you know.
You need streaming for big payloads. That is the right approach. This has nothing at all to do with asynchronous IO. The two are independent. The client cannot even tell that the server is async internally.
I'll add my standard answers for whether to use async IO or not:
https://stackoverflow.com/a/25087273/122718 Why does the EF 6 tutorial use asynchronous calls?
https://stackoverflow.com/a/12796711/122718 Should we switch to use async I/O by default?
Each request runs over a single connection that is kept alive. This goes both for streaming large amounts of data and for long initial delays. I'm not sure why you are concerned about routing. Does your router kill such connections? That would be a problem.
Regarding keep-alive, there is nothing going over the wire to do that. TCP sessions can stay open indefinitely without any kind of wire traffic.

Dynamic server discovery list

I'd like to create a web service that an application server can contact to add itself to a list of servers implementing the application. Clients could then contact the service to get a list of servers. Something similar to how minecraft's heartbeats work for adding your server to the main server list.
I could implement it myself pretty easily, but I'm hoping someone has already created something like this.
Advanced features would be useful. Things like:
Allowing a client to perform queries on application-specific properties like the number of users currently connected to the server
Distributing the server list across more than one machine
Timing out a server's entry in the list if it hasn't sent a heartbeat within some amount of time
Does anyone know of a service like this? I know there are open protocols and servers for doing local-LAN service discovery, but this would be a WAN service.
The protocols I could find that had any relevance to your intended application are these:
XRDS (eXtensible Resource Descriptor Sequence).
XMPP Service Discovery protocol.
The XRDS documentation is obtuse, but you may be able to push service descriptions in XML format. The service type specification might be generic, but I get a headache from trying to decipher committee-speak.
The XMPP Service Discovery protocol (part of the protocol Formerly Known As Jabber) also looked promising, but it seems that even though you could push your service description, they expect it to be one of the services mentioned on this list. Extending it would make it nonstandard.
Finally, I found something called seap (SErvice Announcement Protocol). It's old, it's rickety, the source may be proprietary, it's written in C and Perl, it's a kludge, but it seems to do what you want, kind of.
It seems like pushing a service announcement pulse is such an application-specific and trivial problem, that almost nobody has considered solving the general case.
My advice? Read the protocols and sources mentioned above for inspiration (I'd start with seap), and then write, implement, and publish a generic (probably xml-based) protocol yourself. All the existing ones seem to be either application-specific, incomprehensible, or a kludge.
Basically, you can write it yourself, though I'm not aware of one that's publicly available (I wrote one over 10 years ago, but for a company).
database (table columns: auto-counter, svr_name, svr_ip, check_in_time, any other data)
code to receive heartbeats (http://<you-app.com>?svr_name=XYZ&svr_ip=P.Q.R.S)
code to list servers seen within a certain check_in_time
code to do some housecleaning once in a while (e.g. purge old records)
To send a heartbeat out, you only need to make an http:// call; on Linux use wget with crontab, on Windows use wget.exe with Task Scheduler.
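If the application server doing the check-in happened to be a Cocoa app (as in the question at the top of this page), the heartbeat is just a periodic GET. A sketch under that assumption, with the host, serverName/serverIP properties, and the timer all made up:

// Hypothetical Cocoa heartbeat sender; call this from a repeating NSTimer every few minutes
- (void)sendHeartbeat:(NSTimer *)timer {
    NSString *urlString = [NSString stringWithFormat:@"http://your-app.example.com/heartbeat?svr_name=%@&svr_ip=%@",
                           self.serverName, self.serverIP];
    NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:urlString]];
    [NSURLConnection sendSynchronousRequest:request returningResponse:NULL error:NULL]; // fire-and-forget; run it off the main thread if blocking matters
}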
It is application specific, so even if you wrote one yourself, others can't use it without modifying the source code.

Real-time application newbie - Node.JS + Redis or RabbitMQ -> client/server how?

I am a newbie to real-time application development and am trying to wrap my head around the myriad options out there. I have read as many blog posts, notes, and essays as people have been kind enough to share. Yet a simple problem seems unanswered in my tiny brain. I thought a number of other people might have the same issues, so I might as well sign up and post here on SO. Here goes:
I am building a tiny real-time app which is asynchronous chat + another fun feature. I boiled my choices down to the following two options:
LAMP + RabbitMQ
Node.JS + Redis + Pub-Sub
I believe that I get the basics to start learning and building this out. However, my (seriously n00b) questions are:
How do I communicate with the end-user -> Client to/from Server in both of those? Would that be simple Javascript long/infinite polling?
Of the two, which might be more efficient to build out and manage from a single Slice (assuming 100 - 1,000 users)?
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
I hope this isn't a crazy question and won't get flamed right away. Would love some constructive feedback, love this community!
Thank you.
Architecturally, both of your choices are the same as storing data in an Oracle database server for another application to retrieve.
Both the RabbitMQ and the Redis solution require your apps to connect to an intermediary server that handles the data communications. Redis is most like Oracle, because it can be used simply as a persistent database with a network API. But RabbitMQ is a little different because the MQ Broker is not really responsible for persisting data. If you configure it right and use the right options when publishing a message, then RabbitMQ will actually persist the data for you but you can't get the data out except as part of the normal message queueing process. In other words, RabbitMQ is for communicating messages and only offers persistence as a way of recovering from network problems or system crashes.
I would suggest using RabbitMQ and whatever programming languages you are already familiar with. Since the M in LAMP is usually interpreted as MySQL, this means that you would either not use MySQL at all, or only use it for long term storage of data, not for the realtime communications.
The RabbitMQ site has a huge amount of documentation about building apps with AMQP. I suggest that after you install RabbitMQ, you read through the docs for rabbitmqctl and then create a vhost to experiment in. That way it is easy to clean up your experiments without resetting everything. I also suggest using only topic exchanges because you can emulate the behavior of direct and fanout exchanges by using wildcards in the routing_key.
Remember, you only publish messages to exchanges, and you only receive messages from queues. The exchange is responsible for pattern matching the message's routing_key to the queue's binding_key to determine which queues should receive a copy of the message. It is worthwhile learning the whole AMQP model even if you only plan to send messages to one queue with the same name as the routing_key.
If you are building your client in the browser and you want to build a prototype, then you should consider just using XHR today, and then move to something like Kamaloka-js, a pure JavaScript implementation of AMQP (the AMQ Protocol), which is the standard protocol used to communicate with a RabbitMQ message broker. In other words, build it with what you know today, and then speed it up later with something (AMQP) that has a long-term future in your toolbox.
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
This is usually called RAD (rapid application design/development), and it is what I would recommend right now. It lets you build a proof of concept that you can work from later to get where you want to go.
As for how to talk to the clients from the server, and vice versa, have you read at all on websockets?
Given the choice between LAMP and event-based programming, for what you're suggesting, I would tell you to go with the event-based programming, so Node.js. But that's just one man's opinion.
Well,
LAMP - Apache creates a new process (or thread) for every request. RabbitMQ can be useful and has many features.
Node.js - Uses a single process to handle all requests asynchronously with the help of an event loop, so there is no per-request process-creation overhead as with Apache.
For an asynchronous chat application,
socket.io + Node.js + Redis pub-sub is the best stack.
I have already implemented real-time notifications using the above stack.

Asynchronous socket programming

I'm doing asynchronous socket programming in VB.NET. I've used the asynchronous client and server code from the following links:
http://msdn.microsoft.com/en-us/library/fx6588te.aspx for the server program
The client program is as per http://msdn.microsoft.com/en-us/library/bew39x2a.aspx.
When I try to connect with more than one client, the second client always waits until the first client completes its call. I want calls from multiple clients to be handled at the same time.
Does WCF help multiple clients make calls at the same time? If so, what is WCF and how will it help? Or is there any other concept that can help?
Yes, WCF can help you there. But it implements only well-known protocols like SOAP, WS-*, and JSON, plus a few proprietary ones like the binary TCP binding.
You'd only use async socket programming if you need
High scalability (more than 20 simultaneous clients)
A custom protocol
If you build on top of HTTP, I recommend the HttpListener class
If you need a custom protocol with a few clients, use synchronous socket programming with multiple threads.
If you still want to implement a server with async sockets, then you need a continuous accept loop (after EndAccept(), immediately call BeginAccept() again) and then start the BeginReceive() on each accepted socket.
I can tell you from experience though that debugging such a server is not easy. It's quite hard to follow the chain of events even through a detailed log file. Good luck with that :)

To poll or not to poll (in a web services context)

We can use polling to find out about updates from some source, for example, clients connected to a web server. WCF provides a nifty feature in the way of Duplex contracts, with which I can maintain a connection to a client and make invocations on that connection at will.
Some peeps in the office were discussing the merits of both solutions, and I wanted to get feedback on when each strategy is best used.
I would use an event-based mechanism instead of polling. In WCF, you can do this easily by following the Publish-Subscribe framework that Juval Lowy provides at his website, IDesign.net.
Depends partly on how many users you have.
Say you have 1,000,000 users: you will have problems maintaining that many sessions.
But if your system can respond to 1,000 poll requests a second, then each client can poll every 1,000 seconds.
I think Shiraz nailed this one, but I wanted to say two more things.
1. I've had trouble with Duplex contracts. You have to have all of your ducks in a row with regards to the callback channel... you have to check it to make sure it's open, etc. The IDesign.net stuff would be a minimum amount of plumbing code you'll have to include.
2. If it makes sense for your solution (this is only appropriate in certain situations), the MSMQ binding allows a client to send data to a service in an async manner (like Duplex), but the service isn't "polling" for messages... it gets notified when one enters the queue through some under-the-covers plumbing.
This sort of forces you to turn the communication around (client becomes server, server becomes client), but if the majority of the communication is one-way, this would provide a lot of benefits. The other advantage here is obviously the queued communication - the server can be down and not miss any messages... it'll pick 'em up when it comes back online.
Something to think about.