I've implemented a controller method that makes a couple of requests to a third-party API, which is quite slow. Furthermore, I've used one of Thin's asynchronous features:
# This informs Thin that the request will be handled asynchronously
self.response_body = ''
self.status = -1

Thread.new do
  # This will be the response sent to the client: a Rack triple of
  # status, headers, and a body that responds to #each
  env['async.callback'].call([200, {}, ["Response body"]])
end
(blog post about it)
However, I'm curious whether this could be implemented without using Thin, or, more precisely, whether it could be accomplished with Apache/Phusion Passenger.
Any suggestions, pointers, links, comments or answers are appreciated. Thanks
I'm not sure whether this is possible now with Passenger 4. In this article they announced a complete redesign to support the evented model. Since they also have plans to support Node.js, I would expect the above method to work.
However, if you look at this post from them, they clearly say:
... There is another way to support high I/O concurrency though: multi-threading ...
And so this leaves multithreaded servers as the only serious options for handling streaming support in Rails apps....
Rails is just not designed for the evented process model, but it supports the multi-threaded model quite well, and a multithreaded setup can be achieved with Passenger Enterprise.
Another option might be to extract this problem into a separate application (see the Railscast).
So, for example, instead of directly calling the third-party API in your controller, where most of the time is spent blocking on the I/O call, you process that request in a background job. The user gets an immediate response and then subscribes to some Faye message channel. In your background job, when the third-party call is ready, you publish the response to that channel on Faye (a rough client-side sketch follows below).
PROFIT.
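To make that flow concrete, here is a minimal sketch of the browser side, assuming Faye is mounted at /faye and that the background job publishes to a per-request channel; the mount point and channel name are placeholders, not something from the original setup.
// Sketch only: subscribe to a Faye channel and handle the third-party
// response once the background job publishes it.
var client = new Faye.Client('/faye');  // assumed Faye mount point

// '/responses/123' is a hypothetical channel identifier returned by the
// immediate controller response.
client.subscribe('/responses/123', function (message) {
  // Update the page with the data fetched from the third-party API
  console.log('Third-party response arrived:', message);
});
On the server side, the background job publishes to that same channel once the third-party call completes.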
I'm designing a REST API for a piece of testing software and I have a question. I've searched a lot but it's still not clear to me. My scenario is a queue which contains multiple jobs to be printed. These jobs are complex objects, and the printing workflow is another complex action. I don't know which operation fits this best. According to this, should it be a POST?
http://restful-api-design.readthedocs.org/en/latest/methods.html
In this case my action fits better into an RPC model, but we need to use REST, since according to that page 95% of actions fit this model perfectly.
If it should be a POST, must I send the queue that I want to print inside the body?
Thank you so much.
I don't know exactly what you want to expose through your REST API, but I would think about it like this.
You could expose a resource with the path /printjobs that corresponds to the print queue. A POST to it would add a job to the queue. The returned status code would be 202 Accepted, since the processing is asynchronous, and the response would include an identifier for the new job.
Something in the background would be responsible for handling the jobs in the queue; I think that part is separate from the REST API itself.
Then you could use a resource /printjobs/{id} that gives you the status of the job (method GET), lets you remove it (method DELETE), and lets you update its status (for example to suspend it, with method PUT or PATCH).
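As a rough illustration of that interaction from the client's point of view (the JSON field names are made up for the example, not part of the design above):
// Sketch only: enqueue a print job, then check on it later.
fetch('/printjobs', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ document: 'report.pdf', copies: 2 })  // illustrative payload
})
  .then(function (response) {
    // Expect 202 Accepted plus an identifier for the new job
    return response.json();
  })
  .then(function (job) {
    // GET /printjobs/{id} reports the job's status;
    // DELETE /printjobs/{id} would remove it.
    return fetch('/printjobs/' + job.id);
  });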
Hope it helps you,
Thierry
I have a big Domino web application which makes numerous "OpenAgent" calls to Java agents to retrieve data via AJAX. The application is used by several users.
What are the main parameters you would advise me to check and adjust on the server in order to avoid HTTP hangs or performance issues?
There is quite an overhead in calling an agent, be it LotusScript or Java. So if your AJAX calls are quite frequent, you are going to overload the server easily.
Domino comes with a test tool for this called Server.Load. It will allow you to emulate a heavily loaded server so you can see how your code performs under that load. Another tool I've used is Rational Functional Tester (trial version), but there are probably free ones out there as well (e.g. JMeter or LoadRunner, which I haven't used).
So if you are doing infrequent, complex actions that may take time and don't need a quick response to the user, I would recommend continuing with the web agent.
If they are simple lookup calls, I would recommend using alternative methods. For example, XPages has AJAX functionality built in with scaling in mind. Or, if it is JSON data, look into the Domino Data Service or Domino URL commands (a rough sketch of such a call follows below).
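For the JSON route, a lookup through the Domino Data Service looks roughly like the sketch below. The service has to be enabled on the server, and the database and view names here are placeholders, so treat the exact URL as an assumption to verify against your setup.
// Sketch only: read view entries as JSON via the Domino Data Service
// instead of calling an agent. 'app.nsf' and 'ItemsView' are placeholders.
fetch('/app.nsf/api/data/collections/name/ItemsView')
  .then(function (response) { return response.json(); })
  .then(function (entries) {
    // Each entry is a JSON object for one view row; no agent is involved
    console.log(entries.length + ' entries loaded');
  });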
I have a Backbone application, which has a collection called Links. Links maps to a REST API URI of /api/links.
The API will give the user the latest links. However, I have a system in place that will add a job to the message queue when the user hits this API, requesting that the links in the database are updated.
When this job is finished, I would like to push the new links to the Backbone collection.
How should I do this? In my mind I have three options:
From the Backbone collection, long poll the API for new links
Setup WebSockets to send a "message" to the collection when the job is done, sending the new data with it
Scrap the REST API for my application and just use WebSockets for everything, as I am likely to have more realtime needs later down the line
WebSockets with the REST API
If I use WebSockets, I'm not sure of the best way to integrate this into my Backbone collection so that it works alongside the REST API.
At the moment my Backbone collection looks like this:
var Links = Backbone.Collection.extend({
  url: '/api/links'
});
I'm not sure how to enable the Backbone collection to handle AJAX and WebSockets. Do I continue to use the default Backbone.sync for the CRUD Ajax operations, and then deal with the single WebSocket connection manually? In my mind:
var Links = Backbone.Collection.extend({
  url: '/api/links',

  initialize: function () {
    var socket = io.connect('http://localhost');
    socket.on('newLinks', this.addLinks.bind(this));
  },

  addLinks: function (data) {
    // Prepend `data` to the collection
    this.add(data, { at: 0 });
  }
});
Questions
How should I implement my realtime needs, from the options above or any other ideas you have? Please provide examples of code to give some context.
No worries! Backbone.WS got you covered.
You can init a WebSocket connection like:
var ws = new Backbone.WS('ws://example.com/');
And bind a Model to it like:
var model = new Backbone.Model();
ws.bind(model);
Then this model will listen to message events of the type ws:message, and you can call model.send(data) to send data over that connection.
Of course the same goes for Collections.
Backbone.WS also gives some tools for mapping a custom REST-like API to your Models/Collections.
My company has a fully Socket.IO-based solution using Backbone, primarily because we want our app to "update" the GUI when changes are made on another user's screen in real time.
In a nutshell, it's a can of worms. Socket.IO works well, but it also opens a lot of doors you may not be interested in seeing behind. Backbone events get quite out of whack because they are so tightly tied to the AJAX transactions... you're effectively overriding that default behavior. One of our bigger hiccups has been deletes, because our socket response isn't the model that changed but the entire collection, for example. Our solution does go a bit further than most, because transactions go through a DDL that is specifically set up to be universal across the many devices we need to be able to communicate with, now and in the future.
If you do go the ioBind path, beware that you'll be using different methods for change events compared to your non-socket traffic (if you mix and match). That's the big drawback of that method: standard things like "change" become "update", for example, to avoid collisions. It can get really confusing during late-night debugging or when a new developer joins the team. For that reason, I prefer either going all sockets or not at all, rather than a combination. Sockets have been good so far, and scary fast.
We use a base function that does the heavy lifting, and have several others that extend this base to give us the transaction functionality we need.
This article gives a great starter for the method we used.
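The article linked above has the details; as a very rough sketch of the general idea (this is not our actual base function, and the socket event name is made up), a collection can listen on the socket and merge whatever the server pushes:
// Sketch only: let a socket push stand in for the usual fetch() round trip.
// 'links:update' is a hypothetical event name.
var SocketCollection = Backbone.Collection.extend({
  initialize: function (models, options) {
    this.socket = options.socket;
    this.socket.on('links:update', this.onServerPush.bind(this));
  },

  onServerPush: function (payload) {
    // The server sends the whole collection, so merge rather than replace:
    // set() keeps existing model references and fires add/change/remove events.
    this.set(payload);
  }
});

var links = new SocketCollection([], { socket: io.connect('http://localhost') });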
I've been fighting and fighting for some time with a decent way to handle a workflow based on a series of asynchronous ASIHTTPRequests (I am using queues). So far it has eluded me, and I always end up with a hideous mess of delegate calls and spaghetti code exploding all over my project.
It works as follows:
Download a list of items (1 single ASIHTTPRequest, added to a queue).
The items retrieved in step 1 need to be stored.
Each item from step 1 is then parsed, queuing one ASIHTTPRequest per item for its sub-items.
Each of the requests from step 3 are processed and the sub-items stored.
I need to be able to update the UI with the progress percentage and messages.
I'm unable for the life of me to figure out a clean/maintainable way of doing this.
I've looked at the following links:
Manage Multiple Asynchronous Requests in iOS with ASINetworkQueue
Sync-Async Pair Pattern Easy Concurrency on iOS
But either I'm missing something, or they don't seem to adequately describe what I'm trying to achieve.
Could I use blocks?
I found myself facing a quite similar issue when I was given the task of building an app around a set of async HTTP and FTP handlers in a set of processes and workflows.
I'm not familiar with the ASIHTTPRequest API, but I assume I did something similar.
I defined a so-called RequestOperationQueue which can, for example, represent all request operations of a certain workflow. I also defined several template operations, for example FTPDownloadOperation. And here is the key point: I implemented all these RequestOperations more or less according to the idea of http://www.dribin.org/dave/blog/archives/2009/05/05/concurrent_operations/. Instead of implementing the delegate logic in the operation itself, I implemented something like callback handlers specialized for the different protocols (HTTP, FTP, rsync, etc.) that provide a status property for the given request, which the operation can observe via KVO.
The UI can be notified about the workflow, for example, by a delegate protocol for RequestOperationQueue, e.g. didReceiveCallbackForRQOperation:(RequestOperation *)rqo.
From my point of view, coding workflows that include client-server operations becomes quite manageable with this approach.
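I can't post the Objective-C itself, but the shape of the design is roughly the following, written as a small JavaScript sketch purely for brevity; RequestOperationQueue and the status callback mirror the ideas above and are not a real API.
// Sketch only: a queue of request operations, each reporting status back
// so the UI (the "delegate") can show progress for the whole workflow.
function RequestOperationQueue(onStatus) {
  this.operations = [];
  this.onStatus = onStatus;  // plays the role of the delegate protocol
}

RequestOperationQueue.prototype.add = function (operation) {
  this.operations.push(operation);
};

RequestOperationQueue.prototype.run = function () {
  var queue = this;
  this.operations.forEach(function (operation, index) {
    // Each operation is an async function that reports its own status;
    // this mirrors the per-protocol callback handlers described above.
    operation(function reportStatus(status) {
      queue.onStatus(index, status);  // e.g. didReceiveCallbackForRQOperation:
    });
  });
};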
I've just started development on Macs and have found Cocoa to be a useful and thoughtful framework, but its HTTP functionality has me puzzled.
I have an NSURLConnection object to download a file from my webserver using the HTTP GET method. NSURLConnection's asynchronous connection is great: I get plenty of feedback, I get each chunk received as a new NSData object that I can use to atomically rebuild the file on the client end and, importantly, provide the user with a progress report: [myData length].
Uploads, however, are nowhere near as neat. You can either stick a synchronous request in its own thread or call an asynchronous request (which I believe spawns its own thread), but neither provides you with any useful feedback. There are no delegates to request data or even to let me know when data is being sent. Presumably this limits me to files smaller than available memory.
My question is, therefore, is there a simple and elegant solution to HTTP POST file uploads using Cocoa that provides a good deal of feedback and the ability to read files part-by-part, rather than all at once? Or should I write my own class from low-level networking functionality?
Thanks!
You may want to look at the ASIHTTPRequest framework. I haven't used it for uploading but it looks like it has more feedback and the usage is pretty straightforward.
I decided to go with CFNetwork functions instead of NSURLConnection. There appears to be a bit more flexibility in async notifications and in specific features (authentication for instance). Unfortunately it's a bit more complicated (run loops for instance blow my mind) so I recommend you read the CFNetwork reference guide if you go this route:
http://developer.apple.com/documentation/Networking/Conceptual/CFNetwork/Introduction/Introduction.html
Here's a snippet of code from my POST routine, FWIW:
// Create our URL (relative to baseUrl, which is set up earlier in the routine)
CFStringRef url = CFSTR("Submit");
CFURLRef myURL = CFURLCreateWithString(kCFAllocatorDefault, url, baseUrl);

// Create the message request (POST)
CFStringRef requestMethod = CFSTR("POST");
CFHTTPMessageRef myRequest = CFHTTPMessageCreateRequest(kCFAllocatorDefault, requestMethod, myURL, kCFHTTPVersion1_1);

// Create the HTTP request stream, streaming the body from readStream
// (the data being uploaded)
CFReadStreamRef myReadStream = CFReadStreamCreateForStreamedHTTPRequest(kCFAllocatorDefault, myRequest, readStream);

// Register the client callback and schedule the stream on the run loop;
// this is what delivers the stream events asynchronously to MyReadCallBack.
// (succ is a Boolean success flag accumulated across these calls.)
succ &= CFReadStreamSetClient(myReadStream,
                              kCFStreamEventOpenCompleted | kCFStreamEventCanAcceptBytes | kCFStreamEventErrorOccurred | kCFStreamEventEndEncountered,
                              (CFReadStreamClientCallBack)&MyReadCallBack,
                              &myClientContext);
CFReadStreamScheduleWithRunLoop(myReadStream, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);
succ &= CFReadStreamOpen(myReadStream);
ASIHTTPRequest was originally designed just for this purpose (tracking POST progress), since in the 2.x API, this isn't possible with NSURLConnection. It will definitely be easier to integrate than rolling your own with CFNetwork, and you get lots of other stuff for free (e.g. progress tracking across multiple requests, resuming downloads etc). :)
If the files you are uploading are large, be sure to look at the options for streaming directly from disk, so you don't have to hold the data in memory.
Unfortunately, you're quite correct that NSURLConnection is weak here. The most flexible approach that I would recommend is CocoaAsyncSocket. It means rolling your own HTTP, which is unfortunate, but in most cases not that difficult. CocoaHTTPServer demonstrates how to build a full HTTP server on top of CocoaAsyncSocket, and may have a lot of useful code for your problem. I've found both of these very useful.
Another approach that may be worth investigating is WebKit. Create an invisible WebView, and loadRequest: a POST. I haven't dug into whether the estimatedChange notification system includes the time to upload or only the time to download, but it's worth a try.
You can take a look at the HTTPMessage section of my toolkit repository on github for a simple ObjC wrapper around CFHTTPMessageRef; among other things it'll hand you an NSInputStream object, which saves you thinking about plain-C callback functions.
Depending on what you're reading, you may want to take a look at the StreamingXMLParser section of the same repository for an XML (and HTML) parser which will parse data directly from said NSInputStream on your behalf.