Update HttpResponse Every Few Seconds - sql

My Django application can generate some very large SQL queries. I currently use an HttpRequest object for the data I need, then an HttpResponse to return what I want to show the user.
Obviously, I could let the user wait a minute while these many sets of queries are executed and extracted from the database, then return one monolithic HTML page.
Ideally, I'd like to update the page when I want, something like:
for i, e in enumerate(example):
    Table.objects.filter(someObjectForFilter[i])
    # Return the result to the page,
    # then loop again, 'updating' the response after each iteration.
Is this possible?

I discovered recently that an HttpResponse can be a generator:
def myview(request, params):
    return HttpResponse(mygenerator(params))

def mygenerator(params):
    for i, e in enumerate(params):
        yield '<li>%s</li>' % Table.objects.filter(someObjectForFilter[i])
This will progressively return the results of mygenerator to the page, wrapped in an HTML <li> for display.

Your approach is a bit flawed. You have a few different options.
The first is probably the easiest: use AJAX and HTTP requests. Have a series of these, each of which performs a single Table.objects.filter(someObjectForFilter[i]). As each one finishes, the view completes and returns its results to the client. The client updates the UI and initiates the next query via another AJAX call.
Another method is to use a batch system. This is a bit heftier, but probably a better design if you're doing real "heavy lifting" in the database. You'll need a batch daemon running (a cron job works just fine for this) that scans for incoming tasks. When the user wants to perform something, their request submits a task (it could simply be a row in a database table with their parameters). The daemon grabs it, processes it completely offline (perhaps even on a different machine), and updates the task row with the results when it's complete. The client can then poll periodically to check the status of that row, via traditional or AJAX methods.

Related

Capture start of long running POST VB.net MVC4

I have a subroutine in my Controller
<HttpPost>
Sub Index(Id, varLotsOfData)
    'Point B.
    'By the time execution gets here, all the data has been accepted by the server.
End Sub
What I would like to do is capture the Id of the inbound POST and mark, for example, a database record to say "Id xx is receiving data".
The POST receive can take a long time as there is lots of data.
When execution gets to point B I can mark the record "All data received".
Where can I place this type of "pre-POST completed" code?
I should add: we are receiving the POST data from clients that we do not control. That is, it is most likely a client's server sending the data, not a web browser client that we have served up from our web server.
UPDATE: This is looking more complex than I had imagined.
I'm thinking that a possible solution would be to inspect the worker processes in IIS programmatically. Via IIS Manager you can do this, for example: How to use IIS Manager to get Worker Processes (w3wp.exe) details information?
From your description, you want the client page to show that the method is executing (you can also show a loading gif) and then, when execution completes, show the user a message that it has finished.
The answer is simple: use SignalR.
Here you can find some references:
Getting started with signalR 1.x and Mvc4
Creating your first SignalR hub MVC project
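As a rough illustration, a SignalR hub for this could look like the following C# sketch (the hub name, method, and client-side updateStatus callback are all hypothetical; see the links above for the real setup):

using Microsoft.AspNet.SignalR;

public class ProgressHub : Hub
{
    // Called from the long-running server code to push a status update;
    // every connected client's JS "updateStatus" callback receives it.
    public static void SendStatus(string id, string message)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
        context.Clients.All.updateStatus(id, message);
    }
}

The page subscribes to updateStatus on the client side and swaps the loading gif for the completion message when the final update arrives.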
Hope this will help you
If I understand your goal correctly, it sounds like HttpRequest.GetBufferlessInputStream might be worth a look. It allows you to begin acting on incoming post data immediately and in "pieces" rather than waiting until the entire post has been received.
An excerpt from Microsoft's documentation:
...provides an alternative to using the InputStream property, which waits until the whole request has been received. In contrast, the GetBufferlessInputStream method returns the Stream object immediately. You can use the method to begin processing the entity body before the complete contents of the body have been received and asynchronously read the request entity in chunks. This method can be useful if the request is uploading a large file and you want to begin accessing the file contents before the upload is finished.
So you could grab the beginning of the post and, provided your client-facing page sends the ID towards the beginning of its transmission, you may be able to pull that out. Of course, this means reading raw byte data that would need to be decoded before you could grab the inbound post's ID. There's also a buffered variant, GetBufferedInputStream, that allows the stream to be read in pieces but also builds a complete request object for processing once it has been completely received.
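A minimal sketch of that idea, assuming the Id appears near the start of the body (ExtractId and MarkRecordReceiving are hypothetical helpers you would write yourself):

using System.IO;
using System.Text;
using System.Web;

public class UploadHandler
{
    public void BeginReceive(HttpContext context)
    {
        // Returns immediately, before the full POST body has arrived.
        Stream body = context.Request.GetBufferlessInputStream();

        var buffer = new byte[8192];
        int read = body.Read(buffer, 0, buffer.Length);   // first chunk only
        string head = Encoding.UTF8.GetString(buffer, 0, read);

        string id = ExtractId(head);      // hypothetical: parse the Id from the raw bytes
        MarkRecordReceiving(id);          // hypothetical: flag "Id xx is receiving data"

        // ... keep reading from body until the upload completes,
        //     then mark the record "All data received" ...
    }
}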
Create a custom action filter. Action filters execute filtering logic either before or after an action method is called; they are custom attributes that provide a declarative means to add pre-action and post-action behavior to a controller's action methods.
Specifically, you'll want to look at OnActionExecuted, which is called after a controller action is executed.
Here are a couple of links:
http://www.infragistics.com/community/blogs/dhananjay_kumar/archive/2016/03/04/how-to-create-a-custom-action-filter-in-asp-net-mvc.aspx
http://www.asp.net/mvc/overview/older-versions-1/controllers-and-routing/understanding-action-filters-vb
Here is a lab, but I think it's C#
http://www.asp.net/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-custom-action-filters
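To make the shape concrete, here is a C# sketch (the lab above is C# as well); the attribute name and the record-marking step are hypothetical:

using System.Web.Mvc;

public class TrackReceiveAttribute : ActionFilterAttribute
{
    // Runs after the action body has executed, which means the full
    // POST has already been received by the server at this point.
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var id = filterContext.HttpContext.Request["Id"];
        // e.g. mark the record for this Id as "All data received"
        base.OnActionExecuted(filterContext);
    }
}

You would then decorate the Index action with <TrackReceive()> in VB (or [TrackReceive] in C#).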

Ensure a web server's query will complete even if the client browser page is closed

I am trying to write a control panel that will:
Inform about certain KPIs
Enable the user to initiate certain requests/jobs by pressing a button that then runs a stored proc on the DB, sets a specific setting, etc.
So far, so good, except I would like to run some bigger jobs whose running time is unknown and could exceed both the script timeout period AND the time the user is willing to wait for a response.
What I want is a "fire and forget" process so the user hits the button and even if they kill the page or turn off their phone they know the job has been initiated and WILL complete.
I was looking into C#'s BeginExecuteNonQuery, which is an async call to the query, so the proc is fired but the control doesn't have to wait for a response from it to carry on. However, I don't know what happens when the page/app that fired it is shut down.
I was also thinking of some sort of AJAX command that fires the code in a page behind the scenes so the user doesn't know it's running, but then again I believe that if the user shuts the page down, the script will die and the command will die on the server as well.
The only way I know of for certain is a "queue" table: jobs are inserted into this table, then a SQL Server Agent job comes along every minute or two, checks for new inserts, and runs the code if there are any. That way it is all on the DB, and only a DB crash will destroy it. It won't help with multiple long-running jobs waiting to be run concurrently, but it's the only thing I can be sure of that will ensure the code is run at all.
Any ideas?
Any language is okay.
Since web browsers are effectively disconnected from the server once a request is sent, requests always run for their full amount of time on the server. The governing factor isn't what the browser does, but how long the web site itself will allow an action to continue.
IIS (and web servers in general) has a timeout period for requests: if the work being done simply takes too long, the request is terminated. This involves abruptly stopping whatever is taking so long, such as a database call, running code, and so on.
Simply making your long-running actions asynchronous may seem like a good idea; however, I would recommend against it. The reason is that in ASP and ASP.NET, asynchronously-called code still consumes a thread in a way that blocks other legitimate requests from getting through (in some cases you can end up consuming two threads!). This could have performance implications in non-obvious ways. It's better to just increase the timeout and allow the synchronously blocking task to complete. There's nothing special you have to do to make such a request complete fully; it will occur even if the sender closes his browser or turns off his phone immediately after (presuming the entire request was received).
If you're still concerned about making certain work finish, no matter what is going on with the web request, then it's probably better to create an out-of-process server/service that does the work and to which such tasks can be handed off. Your web site then invokes a method that, inside the service, starts its own async thread to do the work and then immediately returns. Perhaps it also returns a request ID, so that the web page can check on the status of the requested work later through other methods.
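A bare-bones sketch of that hand-off (names are hypothetical, and a real version would live in a separate service process and persist status somewhere durable rather than in memory):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class JobRunner
{
    private static readonly ConcurrentDictionary<Guid, string> Status =
        new ConcurrentDictionary<Guid, string>();

    // Kick off the work and return immediately with an ID the page can poll.
    public static Guid Start(Action work)
    {
        var id = Guid.NewGuid();
        Status[id] = "running";
        Task.Run(() =>
        {
            try { work(); Status[id] = "completed"; }
            catch (Exception ex) { Status[id] = "failed: " + ex.Message; }
        });
        return id;
    }

    public static string Check(Guid id)
    {
        string status;
        return Status.TryGetValue(id, out status) ? status : "unknown";
    }
}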
You may use an asynchronous method and call the query from it.
Your simple method can be changed into an async method in the following manner.
Consider that you have a TestMethod to be called asynchronously:
public class AsyncDemo
{
    public string TestMethod(out int threadId)
    {
        // Call your query here, then report which thread ran it.
        threadId = System.Threading.Thread.CurrentThread.ManagedThreadId;
        return "query complete";
    }
}

// Create an async handler delegate:
public delegate string AsyncMethodCaller(out int threadId);
In your main program (or wherever you have to call TestMethod):
public static void Main()
{
    // The asynchronous method puts the thread id here.
    int threadId;

    // Create an instance of the test class.
    AsyncDemo ad = new AsyncDemo();

    // Create the delegate.
    AsyncMethodCaller caller = new AsyncMethodCaller(ad.TestMethod);

    // Initiate the asynchronous call.
    IAsyncResult result = caller.BeginInvoke(out threadId, null, null);

    // Call EndInvoke to wait for the asynchronous call to complete,
    // and to retrieve the results.
    string returnValue = caller.EndInvoke(out threadId, result);

    Console.WriteLine("The call executed on thread {0}, with return value \"{1}\".",
        threadId, returnValue);
}
In my experience, a Classic ASP or ASP.NET page will run until complete, even if the client disconnects, unless you have something in place that checks whether the client is still connected and acts if they are not, or a timeout is reached.
However, it would probably be better practice to run these sorts of jobs as scheduled tasks.
On submitting, your web page could record in a database that the task needs to be run; then, when the scheduled task runs, it checks for this and starts the job.
Many web hosts and/or web control panels allow you to create scheduled tasks that call a URL on schedule.
Alternatively, if you have direct access to the web server, you could create a scheduled task on the server to call a URL on schedule.
Or, if ASP.NET, you can put some code in global.asax to run on a schedule. Be aware, though, that if your website is set to stop after a certain period of inactivity, this will not work unless there is frequent continuous activity.
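Whichever scheduling mechanism you pick, the web-facing half of this stays tiny. A C# sketch (table and column names are invented; the scheduled task supplies the other half by processing pending rows):

using System.Data.SqlClient;

public static class JobQueue
{
    // Called by the button's POST handler: just record that the job is wanted.
    public static void SubmitJob(string connectionString, string jobType)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO JobQueue (JobType, Status) VALUES (@type, 'pending')", conn))
        {
            cmd.Parameters.AddWithValue("@type", jobType);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    // The URL hit on schedule would SELECT the pending rows, run the
    // matching stored proc for each, and UPDATE each row's Status to 'done'.
}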

mvc4 passing data between paging action and views

I have an action which invokes a service (not a database) to get some data for display, and I want to page these data. However, every time a second page is clicked, it invokes this action and of course invokes the service again. Actually, when I click the first page link, it already generates the whole data set, including what the second page needs. I just want to invoke the service once, get all the data, and not invoke the service again when paging. How can I deal with that? Hope someone can give me a hint.
There are several ways to address this. If it's practical and the data set is limited in size, it's OK to return the entire data set in the first request.
If that's your case I would consider returning a pure JSON object when you load the page initially. You can then deserialize this into a JS object variable on the web page that you can perform your paging operations against. This is an example of client side paging where all the data exists client side.
Another approach is to do Ajax based paging where you request the data for the next page as needed. I would still recommend returning JSON in this scenario as well.
The two approaches differ in that the first one returns all the data upfront, whereas the second one only returns what you need to render any given page.
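For the first approach, the action might look something like this C# sketch (CallService stands in for whatever service the question is calling):

using System.Collections.Generic;
using System.Web.Mvc;

public class ReportController : Controller
{
    // Stand-in for the real (expensive) service call.
    private static IEnumerable<string> CallService()
    {
        return new[] { "row 1", "row 2", "row 3" };
    }

    // Invoked once; the client keeps the JSON and pages through it in JS.
    public ActionResult AllItems()
    {
        var items = CallService();
        return Json(items, JsonRequestBehavior.AllowGet);
    }
}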

How to update file upload messages using backbone?

I am uploading multiple files using javascript.
After I upload the files, I need to run several processing functions.
Because of the processing time required, I need a UI on the front end telling the user the estimated time left for the entire process.
Basically I have 3 functions:
/upload - this is the endpoint for uploading the files
/generate/metadata - this is the next endpoint, which should be triggered after /upload
/process - this is the last endpoint, which should be triggered after /generate/metadata
This is basically how I expect the screen to look.
Information such as percentage remaining and time left should be displayed.
However, I am unsure whether to have the server supply the information or to do a hackish estimate solely using JavaScript.
I would also need to update the screen with messages telling the user where things stand:
"Currently uploading" if I am at function 1.
"Generating metadata" if I am at function 2.
"Processing ..." if I am at function 3.
Function 2 only occurs after the successful completion of 1.
Function 3 only occurs after the successful completion of 2.
I am already using q.js promises to handle some parts of this, but the code has gotten scarily messy.
I recently came across Backbone, and it allows structured ways to handle single-page-app behavior, which is what I wanted.
I have no problems with the server side returning JSON responses for success or failure of the endpoints.
I was wondering what would be a good way to implement this function using Backbone.js
You can use a "progress" file or DB entry which stores the state of the backend process. Have your backend process periodically update this file. For example, write this to the file:
{"status": "Generating metadata", "time": "3 mins left"}
After the user submits the files, have the frontend start pinging a backend progress function using a simple AJAX call and setTimeout. The progress function simply opens this file, grabs the JSON-formatted status info, and updates the frontend progress bar.
You'll probably want the ajax call to be attached to your model(s). Have your frontend view watch for changes to the status and update accordingly (e.g. a progress bar).
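If the backend happens to be ASP.NET MVC (the question doesn't say), the progress endpoint can be as small as this sketch; the file-per-job naming scheme is an assumption:

using System.Web.Mvc;

public class ProgressController : Controller
{
    // The frontend polls this; the background job rewrites the file
    // as it moves through upload -> metadata -> processing.
    public ActionResult Status(string jobId)
    {
        // In real code, validate jobId before using it in a path.
        var path = Server.MapPath("~/App_Data/progress-" + jobId + ".json");
        if (!System.IO.File.Exists(path))
            return HttpNotFound();
        return Content(System.IO.File.ReadAllText(path), "application/json");
    }
}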
Long Polling request:
Polling request for updating Backbone Models/Views
Basically, when you upload a file you will assign a "FileModel" to each given file. The FileModel will start a long-polling request every N seconds until it gets the status "complete".

In the new ASP.NET Web API, how do I design for "Batch" requests?

I'm creating a web API based on the new ASP.NET Web API. I'm trying to understand the best way to handle people submitting multiple data-sets at the same time. If they have 100,000 requests it would be nice to let them submit 1,000 at a time.
Let's say I have a create new Contact method in my Contacts Controller:
public string Put(Contact _contact)
{
    // add the new _contact to the repository
    repository.Add(_contact);

    // return success
    return "success";
}
What's the proper way to allow users to "Batch" submit new contacts? I'm thinking:
public string BatchPut(IEnumerable<Contact> _contacts)
{
    foreach (var contact in _contacts)
    {
        repository.Add(contact);
    }
    return "success";
}
Is this good practice? Will this parse a PUT request with a JSON array of Contacts (assuming they are correctly formatted)?
Lastly, any tips on how best to respond to Batch requests? What if 4 out of 300 fail?
Thanks a million!
When you PUT a collection, you are either inserting the whole collection or replacing an existing collection as if it were a single resource. It is very similar to GET, DELETE, or POST of a collection. It is an atomic operation. Using it as a substitute for individual calls to PUT a contact may not be very RESTful (but that is really open for debate).
You may want to look at HTTP pipelining and send multiple PutContact requests over the same socket. With each request you can return a standard HTTP status for that single request.
I implemented batch updates in the past with SOAP and we encountered a number of unforeseen issues when the system was under load. I suspect you will run into the same issues if you don't pay attention.
For example, the database might time out in the middle of the batch update, and then all hell broke loose in terms of failures, reliability, transactions, etc. And the poor client had to figure out what was actually updated and try again.
When there were too many records to update, the HTTP request would time out because we took too long. That opened another can of worms.
Another concern was how much data to accept during the update. Is 10MB of contacts enough? Perhaps 1MB? Larger buffers have numerous implications in terms of memory usage and security.
Hence my suggestion to look at HTTP pipelining.
Update
My suggestion would be to handle batch creation of contacts as an async process. Just assume that a "job" is the same as a "batch create" process. So the service may look as follows:
public class JobService
{
    // Post
    public void Create(CreateJobRequest job)
    {
        // 1. Create job in the database with status "pending"
        // 2. Save job details to disk (or S3)
        // 3. Submit the job to MSMQ (or SQS)
        // 4. For 20 seconds, poll the database to see if the job completed
        // 5. If the job completed, return 201 with a URI to the "Get" method below
        // 6. If not, return 202 (aka the request was accepted for processing, but has not completed)
    }

    // Get
    public Job Get(string id)
    {
        // 1. Fetch the job from the database
        // 2. Return the job if it exists, or 404
    }
}
The background process that consumes items from the queue can update the database, or alternatively perform a PUT to the service, to update the status of the job to "running" and "completed".
You'll need another service to navigate through the data that was just processed, address errors and so forth.
Your background process may need to be tolerant of validation errors. If not, or if your service does validation (assuming you are not doing database calls etc. for which response times cannot be guaranteed), you can return a structure like CreateJobResponse that contains enough information for your client to fix the issue and resubmit the request. If you have to do some validation that is time-consuming, do it in the background process, mark the job as failed, and update the job with the information that will allow the client to fix the errors and resubmit the request. This assumes that the client can do something with the fact that the job failed.
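One possible shape for that CreateJobResponse, with all field names invented for illustration:

using System.Collections.Generic;

public class CreateJobResponse
{
    public string JobId { get; set; }
    public string Status { get; set; }             // "pending", "completed", "failed"
    public List<BatchError> Errors { get; set; }   // enough detail to fix and resubmit
}

public class BatchError
{
    public int RecordIndex { get; set; }   // which contact in the batch failed
    public string Field { get; set; }
    public string Message { get; set; }
}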
If the Create method breaks the job request into many smaller "jobs" you'll have to deal with the fact that it may not be atomic and pose numerous challenges to monitor whether jobs completed successfully.
A PUT operation is supposed to replace a resource. Normally you do this against a single resource, but when doing it against a collection, that would mean you replace the original collection with the set of data passed. I'm not sure whether you mean to do that, but I assume you are just updating a subset of the collection, in which case a PATCH method would be more appropriate.
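In Web API terms that could look roughly like the sketch below; the controller is a placeholder and the repository call is commented out because it depends on your data layer:

using System.Collections.Generic;
using System.Web.Http;

public class ContactsController : ApiController
{
    // Applies only the submitted subset instead of replacing the collection.
    [AcceptVerbs("PATCH")]
    public void Patch(IEnumerable<Contact> changedContacts)
    {
        foreach (var contact in changedContacts)
        {
            // repository.Update(contact);
        }
    }
}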
Lastly, any tips on how best to respond to Batch requests? What if 4 out of 300 fail?
That is really up to you. There is only a single response so you can send a 200 OK or a 400 Bad Request and put the details in the body.