Implement long-polling API with Symfony

I am trying to implement an API that uses the long-polling concept in the Symfony framework.
Let's say I have a table 'feeds' which can only grow (assume users can insert their feeds from another interface).
I want to create a client-side real-time updated page. The idea is the following:
The client sends an AJAX request with the timestamp of the last modification (the first time it sends 0).
The server compares the client's timestamp to the table's timestamps, retrieving all messages newer than the one sent by the client.
If there are newer messages, they are returned immediately to the client, along with the timestamp of the latest one.
Otherwise, if there are no new messages, the server enters a two-minute wait loop, checking every 1-3 seconds (randomly) whether new messages have arrived.
When the client receives the server's answer, the browser updates the view and immediately sends a new AJAX request.
In other words, instead of sending an AJAX call every x seconds, the server holds the request until it has new information for us.
Having good experience with Symfony, I implemented a simple demo of this API, and it works great. I ran into a session-blocking problem (while the AJAX call is held, no other request from the same session can reach the server), so I simply added the following to the action:
public function executeIndex(sfWebRequest $request)
{
    session_write_close();
    // ...
}
(see also this link)
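For reference, here is a minimal sketch of what the full long-polling action could look like in symfony 1.x. This is an illustration only: the 'since' parameter name, the Feed model, the created_at comparison, and the JSON response shape are all assumptions, not the original code.

public function executeIndex(sfWebRequest $request)
{
    // Release the session lock so this held request does not block
    // other requests from the same user.
    session_write_close();

    $since   = (int) $request->getParameter('since', 0);
    $timeout = time() + 120; // give up after two minutes

    do {
        // Hypothetical query: fetch feeds newer than the client's timestamp.
        $feeds = Doctrine_Core::getTable('Feed')
            ->createQuery('f')
            ->where('f.created_at > ?', date('Y-m-d H:i:s', $since))
            ->execute();

        if (count($feeds) > 0) {
            return $this->renderText(json_encode(array(
                'ts'    => time(),
                'feeds' => $feeds->toArray(),
            )));
        }

        sleep(rand(1, 3)); // wait 1-3 seconds before checking again
    } while (time() < $timeout);

    // Nothing new within two minutes: return empty so the client re-polls.
    return $this->renderText(json_encode(array('ts' => $since, 'feeds' => array())));
}

Note that each held request still occupies a web server worker and a database connection for up to two minutes, which is exactly the scaling problem described below.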
Then I tested massive access to the API: 100 concurrent users work fine, but at 1000 everything crashes.
I realized that I have two problems:
For each access, a new DB connection is opened.
For each access, the server spawns a new process.
For the first problem I tried to set persistent: true in my database.yml Doctrine connector. When I monitored the server connections, I saw that each access to the API still opens a new connection. So basically I am still stuck with the same two problems.
Does anyone have any ideas or experience with this issue? Or should I give up on implementing my API with Symfony?

I think using Symfony for this is the wrong approach; using sockets would be much easier.
For example, have a look at node.js or the APE Project (comet).
Both are able to handle far more concurrent users than Apache, lighttpd, or nginx.

Apache creates a separate thread (or process) for each user, and each one holds its own database connection; that's why the number of DB connections is so high.

Related

How to handle the application if the connection breaks during a web service call

In several interviews I have been asked about the handling of connections, web service calls, and server responses. Even now I am not clear about many things. Could you please help me get a better idea of the following scenarios?
What is the advantage of using NSURLSessionDataTask instead of NSURLConnection? My understanding is that with NSURLSessionDataTask data loss will not happen even if the connection breaks, but it will with the latter. How does that work?
If the connection breaks after sending the request to the server, or while connecting to the server, how can we handle this at our end with NSURLConnection and NSURLSessionDataTask? My idea is to use the Reachability classes and check when the device comes back online.
The data we sent was updated on the server side, but we never received the server's response. What can we do on our side to handle this situation? Is increasing timeoutInterval the only thing we can do?
Please help me with these scenarios. Thank you very much in advance!
That's multiple questions, really, but I'll try to answer them all briefly.
Most failure handling is the same between NSURLConnection and NSURLSession. The main advantages of the latter are support for background downloads and cancelling groups of related requests.
That said, if you're doing a large download that you think might fail, NSURLSession does provide download tasks that let you resume the download if your network connection fails, similar to what NSURLDownload used to do on OS X (never available on iOS). This only helps for downloading large files, though, not for large uploads (which require significant server-side support to resume) or other requests.
Your intuition is correct. When a connection fails, create a reachability object monitoring that particular hostname to see when it would be a good time to try the request again. Then, try the request again.
You might also display some sort of advisory UI to say that you have no Internet connection. (By advisory, I mean something that the user doesn't have to click on and that does not impact offline use of the app any more than necessary; look at the Facebook app for a great example.)
Provide a unique identifier when you make the request, and store that on the server along with the server's response until the client acknowledges receipt of the response (or purge it anyway after some reasonable number of days). When the upload finishes, the server gives you back its response if it can.
If something goes wrong, the client asks the server to resend the response associated with that unique identifier. Once your client has the data, it acknowledges receipt and the server deletes the response. If you ask the server for the response and it doesn't have one, then the upload didn't really complete.
With some additional work, this approach can make it possible to support long-running uploads more reliably. If an upload fails, ask the server how much data it got for that identifier, then tell the server that you're going to upload new data starting at the next byte. On the server side, overwrite the old data starting at that byte (just in case some data was still being written when you asked for the length).
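The answer leaves the server side open; purely as an illustration, here is a rough PHP sketch of the "store the response under a unique identifier" idea. The X-Request-Id header, the file-based storage, and processUpload() are hypothetical, not a real API.

// The client generates a unique id per logical request and sends it
// with the upload, with any retry, and with the final acknowledgement.
$id = isset($_SERVER['HTTP_X_REQUEST_ID']) ? $_SERVER['HTTP_X_REQUEST_ID'] : null;
if ($id === null) {
    http_response_code(400); // the client must supply its identifier
    exit;
}
$cacheFile = sys_get_temp_dir() . '/response_' . md5($id);

if ($_SERVER['REQUEST_METHOD'] === 'GET') {
    // The client lost the response and is asking for it again by id.
    if (is_file($cacheFile)) {
        readfile($cacheFile);    // resend the stored response
    } else {
        http_response_code(404); // no stored response: the upload never completed
    }
} elseif ($_SERVER['REQUEST_METHOD'] === 'DELETE') {
    // The client acknowledges receipt, so the stored response can be purged.
    if (is_file($cacheFile)) {
        unlink($cacheFile);
    }
} else {
    // Normal upload: process it, then keep the response until acknowledged.
    $result = processUpload(file_get_contents('php://input')); // app-specific
    file_put_contents($cacheFile, $result);
    echo $result;
}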
Hope that helps.

Laravel API response

I am new to Laravel and API development, and I am facing a problem. The workflow of my API is: a user sends POST data to the API, and the API processes that data into the database. There is a step in which PHP waits 30 minutes while inserting data into two different tables.
The problem is that, as far as I know, I can only send the JSON response back to the user once that process is complete, but that way the user has to wait 30 minutes.
Is there a way for the 30-minute process to run in the background, so the JSON response can be sent immediately once that process has started?
I have read about queues, but the web host I will be using does not give me access to the server itself to install anything; it only gives me space for my files.
I am confused about how to achieve this, so that the user does not have to wait long for a response.
I would really appreciate any help.
Thanks!
You can use the queue without installing anything on the server. All your configuration goes in the config/queue.php file.
You can use
Amazon SQS: aws/aws-sdk-php ~3.0
Beanstalkd: pda/pheanstalk ~3.0
Redis: predis/predis ~1.0
Read more here: https://laravel.com/docs/5.2/queues#introduction
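As a concrete sketch for Laravel 5.2 (the ProcessImport job name and its contents are assumptions): the controller queues the slow work and responds immediately, and a queue worker performs the 30-minute inserts in the background.

// app/Jobs/ProcessImport.php -- extends the App\Jobs\Job base class
// that ships with a fresh Laravel 5.2 application.
namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessImport extends Job implements ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    protected $payload;

    public function __construct(array $payload)
    {
        $this->payload = $payload;
    }

    public function handle()
    {
        // The slow inserts into the two tables go here.
    }
}

// In the API controller (add `use App\Jobs\ProcessImport;` at the top):
public function store(Request $request)
{
    $this->dispatch(new ProcessImport($request->all()));

    return response()->json(['status' => 'queued'], 202); // respond right away
}

Bear in mind that something still has to consume the queue (for example php artisan queue:work running on the host), so it's worth confirming the host allows a long-running worker or a cron entry.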

Trigger a function when changes in the DB are being committed

I have always wondered whether it's possible to create a block of code (probably PHP) that will execute when a certain change is committed to the database.
For instance, a chat application: when a user sends a message, a row is added to a table, and I would like to force all of the other users' browsers to issue an AJAX request to read this new value (rather than sending an AJAX request every 100 ms to check whether there is a new message).
I remember something that involved node.js and some other type of DB rather than MySQL. If that is the only solution, can it work alongside a normal MySQL database?
Thanks in advance!
Yes, MySQL supports triggers, but they are pretty much limited to performing other data operations, so you'd still have to get some notification sent to your JavaScript client.
A better way of doing client notifications is with websockets or comet, allowing the server to push notifications from a message queue.
You didn't give much detail about your programming environment, so I'll leave it to you to follow the tag links I gave above, and research the appropriate tools and frameworks for using these general methods.
Re your comment:
For PHP, here's an example "push" chat application:
http://www.aljtmedia.com/blog/websockets-for-php-ratchet-push-chat-application/
Here's a primer on using message queues in general:
http://blog.thecodepath.com/2013/01/06/asynchronous-processing-in-web-applications-part-2-developers-need-to-understand-message-queues/
And here are tutorials for RabbitMQ (one simple option among many MQ solutions usable by PHP), including PHP examples: https://www.rabbitmq.com/getstarted.html
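To make the push approach concrete, here is a stripped-down sketch in the spirit of the Ratchet tutorial linked above (install with composer require cboden/ratchet; the class name and port are arbitrary). When one client sends a message, the server broadcasts it to every other connected client, so nobody has to poll.

use Ratchet\ConnectionInterface;
use Ratchet\Http\HttpServer;
use Ratchet\MessageComponentInterface;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;

require __DIR__ . '/vendor/autoload.php';

class ChatPusher implements MessageComponentInterface
{
    private $clients;

    public function __construct()
    {
        $this->clients = new \SplObjectStorage();
    }

    public function onOpen(ConnectionInterface $conn)
    {
        $this->clients->attach($conn);
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        // A new chat message arrived: push it to everyone else.
        foreach ($this->clients as $client) {
            if ($client !== $from) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn)
    {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}

IoServer::factory(new HttpServer(new WsServer(new ChatPusher())), 8080)->run();

The application code that inserts the chat row into MySQL would also send the message to this process (or to a message queue it consumes), which is what replaces the database trigger.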

Multiple calls to service not handling all requests

I have a Silverlight app that uses WCF to communicate with my SQL Server via Entity Framework. When I send multiple requests to the service, it fails to handle them all. If I use the callback event and send each request as the previous one completes, all is well. How can I get this to work without this workaround?
Edited:
For i As Integer = 0 To someNumberOfTimesInLoop
    serv.CloseElement_IncAsync(Params...)
Next
So I wonder if it's a concurrency problem, since the requests hit at the same time and the Id fields for these tables are not incremented in time?
I have added this code in my App.xaml.vb based on this blog:
WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp)
Any ideas?

How to design a report request from client machines to be run on an available server

I have a vb.net 2.0 winforms project that is full of all kinds of business reports (generated with Excel interop calls) that can be run "on-demand". Some of these reports filter through lots of data and take a long time to run - especially on our older machines around the office.
I'd like to have a system where a report request can be made from the client machines, some listener sees it, locates a server with low-load, runs the report on that server, and emails the result to the user that requested it.
How should I design such a system? All our reports take different parameters, and I can't figure out how to deal with this. Does each generator need to inherit from a "RemoteReport" class that does this work? Do I need a service on one of our servers that listens for these requests?
One approach you could take is to create a database that the clients can connect to, and have each client add a record representing a report request, with the necessary parameters passed in an XML field.
You can then have a service that periodically checks this database for new requests and, depending on how many other requests are currently being processed, submits the request to the least busy server.
The server would then be able to run the report and email the file to the user.
This is by no means a quick solution, and it will likely take some time to design the various elements and get them working together, but it's not impossible, especially considering that it has the potential to scale rather well (by adding more, or more powerful, servers).
I developed a similar system where a user can submit a request for data from a web interface, that would get picked up by a request manager service that would delegate the request to the appropriate server based on the type of request, while providing progress indication to the client.
How about writing a web service that accepts reporting requests? On completion, the reports could be emailed to the users. The web service could provide a Status method that allows your WinForms app to interrogate the current status of the report requests.