Laravel API response

I am new to Laravel and API development and I am facing a problem. The workflow of my API is: a user sends POST data to the API, and the API processes that data into the database. There is one step in which PHP waits for about 30 minutes while inserting data into two different tables.
The problem is that, as far as I know, I can only send the JSON response back to the user once that process is complete, so the user has to wait for 30 minutes.
Is there a way to run the 30-minute process in the background and send the JSON response immediately once the process has started?
I have read about queues, but the host I will be using will not give me access to the server to install anything; it will only give me space for my files.
I am not sure how to achieve this so that the user does not have to wait long for a response.
I would really appreciate any help.
Thanks.

You can use queues without installing anything on the server yourself. All of your configuration goes in the config/queue.php file.
You can use, for example:
Amazon SQS: aws/aws-sdk-php ~3.0
Beanstalkd: pda/pheanstalk ~3.0
Redis: predis/predis ~1.0
Read more here: https://laravel.com/docs/5.2/queues#introduction
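As a rough sketch of what that looks like in practice (assuming Laravel 5.2's default job scaffolding and, say, the database driver, which only needs a jobs table created with php artisan queue:table; ProcessImport and its payload are made-up names):

// app/Jobs/ProcessImport.php
namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessImport extends Job implements ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    protected $payload;

    public function __construct(array $payload)
    {
        $this->payload = $payload;
    }

    public function handle()
    {
        // The slow ~30-minute inserts into the two tables go here;
        // a queue worker runs this later, outside the HTTP request.
    }
}

// In the controller that receives the POST:
public function store(Request $request)
{
    $this->dispatch(new ProcessImport($request->all()));

    // The user gets an answer immediately; the job runs in the background.
    return response()->json(['status' => 'accepted'], 202);
}

A worker still has to process the queue (php artisan queue:work or queue:listen); on shared hosting, a cron entry that runs queue:work every minute is a common workaround.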

Related

Test async data processing flows with Karate Labs

I'm looking for best practices or the recommended approach to test async code execution with Karate.
Our use cases are all pretty similar but a basic one is:
Client makes HTTP request to API
API accepts the request and creates a message which is added to a queue
API replies with ACCEPTED / 202
Worker picks up the message from the queue, processes it and updates the database
Eventually, after the work is finished, another endpoint delivers the updated data
How can I check with Karate that after processing has finished other endpoints return the correct result?
Concrete real life example:
Client requests a processing-intensive data export from the API, e.g. via HTTP POST /api/export
API creates a message with the information needed for the export and puts it on an AWS SQS queue
API replies with 202
Worker receives the message, creates the export, uploads the result as a ZIP to S3 and finally creates an entry in the database representing this export
Client can now query the list-exports endpoint, e.g. via HTTP GET /api/exports
API returns 200 with the list of exports, including the newly created entry
Generally I have two ideas on how to approach this:
Use Karate's retry until on the endpoint that returns the list of exports
In the API response (step #3), return the message ID and use the SQS HTTP API to poll until the message has been processed, then query the list endpoint to check the result
Is either of those approach recommended or should I choose an entirely different solution?
The moment queuing comes into the picture, I would not recommend retry until. It would work if you are in a hurry, but if you are OK with writing a little bit of Java code, please read on. Note that this Java "glue code" needs to be written only once, and then the team responsible for writing the functional flows will be up and running.
I personally would prefer option (2) just because when a test fails, you will have a lot more diagnostic information and traces to look at.
Pretty sure you won't have a problem using AWS Java libs to do things such as polling SQS.
I think this example will answer all your questions: https://twitter.com/getkarate/status/1417023536082812935

How to handle the application if the connection breaks in the middle of a web service call

In several interviews I have been asked about handling connections, web service calls, server responses and so on. Even now I am not clear about many of these things. Could you please help me get a better idea about the following scenarios?
What is the advantage of using NSURLSessionDataTask instead of NSURLConnection? My understanding is that with NSURLSessionDataTask data loss will not happen even if the connection breaks, but with NSURLConnection it will. How does that work?
If the connection breaks after sending the request to the server, or while connecting to the server, how can we handle it in our code with NSURLConnection and NSURLSessionDataTask? My idea is to use the Reachability classes and check when the network comes back online.
The data we sent was applied on the server side, but we never received the response from the server. What can we do on our side to handle this situation? Is increasing timeoutInterval the only thing we can do?
Please help me with these scenarios. Thank you very much in advance!!
That's multiple questions, really, but I'll try to answer them all briefly.
Most failure handling is the same between NSURLConnection and NSURLSession. The main advantages of the latter are support for background downloads and cancelling groups of related requests.
That said, if you're doing a large download that you think might fail, NSURLSession does provide download tasks that let you resume the download if your network connection fails, similar to what NSURLDownload used to do on OS X (never available on iOS). This only helps for downloading large files, though, not for large uploads (which require significant server-side support to resume) or other requests.
Your intuition is correct. When a connection fails, create a reachability object monitoring that particular hostname to see when it would be a good time to try the request again. Then, try the request again.
You might also display some sort of advisory UI to say that you have no Internet connection. (By advisory, I mean something that the user doesn't have to click on and that does not impact offline use of the app any more than necessary; look at the Facebook app for a great example.)
Provide a unique identifier when you make the request, and store that on the server along with the server's response until the client acknowledges receipt of the response (or purge it anyway after some reasonable number of days). When the upload finishes, the server gives you back its response if it can.
If something goes wrong, the client asks the server to resend the response associated with that unique identifier. Once your client has the data, it acknowledges receipt and the server deletes the response. If you ask the server for the response and it doesn't have one, then the upload didn't really complete.
With some additional work, this approach can make it possible to support long-running uploads more reliably. If an upload fails, ask the server how much data it got for that identifier, then tell the server that you're going to upload new data starting at the next byte. On the server side, overwrite the old data starting at that byte (just in case some data was still being written when you asked for the length).
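Server-side, the bookkeeping for that idea can be quite small. The sketch below is illustrative only: the question does not say what the backend runs, so plain PHP is used as an example, and the id/action parameters and the JSON-file store are made up (a real implementation would use a database table):

<?php
// One script standing in for three endpoints: upload, fetch-result, acknowledge.
// The client generates a unique ID and sends it with every upload (?id=...).

$storeFile = __DIR__ . '/responses.json';          // stand-in for a real table
$responses = is_file($storeFile)
    ? json_decode(file_get_contents($storeFile), true)
    : array();

$id     = isset($_GET['id']) ? $_GET['id'] : '';
$action = isset($_GET['action']) ? $_GET['action'] : 'upload';

if ($action === 'upload') {
    // ... handle the uploaded body and update your data here ...
    // Keep the response under the client's ID until the client acknowledges it.
    $responses[$id] = array('status' => 'ok', 'finishedAt' => time());
    file_put_contents($storeFile, json_encode($responses));
    echo json_encode($responses[$id]);             // the normal, happy-path response
} elseif ($action === 'result') {
    // The client never saw the response, so it asks again with the same ID.
    // No stored entry means the upload never actually completed.
    echo isset($responses[$id])
        ? json_encode($responses[$id])
        : json_encode(array('status' => 'unknown'));
} elseif ($action === 'ack') {
    // The client confirmed receipt; the stored response can now be purged.
    unset($responses[$id]);
    file_put_contents($storeFile, json_encode($responses));
}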
Hope that helps.

Rails 3: Return large amount of data to user via API

My app has an API from which users can request data. Sometimes that data takes a long time to process, and that is breaking my code.
I need a solution for this and I was thinking of using delayed_job, but I'm not sure how it works. If the user makes a request, I need to give them an answer. Even if I process the data in the background, the call still needs to wait until the job returns.
What is the solution for this? I am not sure how to do it.
Thanks
Heroku has a 30-second timeout, which is why your requests are failing (probably H12 or H13 in your Heroku logs).
There are three methods to work around this.
Keep the connection open by sending blank data.
You'll need to respond within the first 30 seconds and every 55 seconds after that. Use the time in between to process the data. Sending spaces should not affect the ability of the browser to read the response.
Callback
Have the user provide a callback URL in the initial request. When you finish processing the data, hit the callback URL with your response.
Polling
As suggested by Codeglot, you can provide the user with a key. To check on their request, they can ping your server with that key.
Tell the user that their data is being processed and will be available shortly. YouTube, Vimeo, Facebook and Twitter all do this.

Implement long-polling API with Symfony

I am trying to implement an API which uses the long-polling concept in the Symfony framework.
Let's say that I have a table 'feeds' which can only grow (assume that users can insert their feeds from another interface).
I want to create a client-side real-time updated page. The idea is the following:
The client sends an AJAX request with the timestamp of the last modification (the first time it sends 0)
The server compares the client's timestamp with the timestamps in the table and retrieves all messages newer than the one sent by the user
If there are newer messages, return them immediately to the client, together with the timestamp of the latest one
If, on the other hand, there are no new messages, enter a 2-minute wait loop, checking every 1-3 seconds (randomly) whether there are new messages
When the client receives the server's answer, the browser updates the view and immediately sends a new AJAX request
In other words, instead of sending an AJAX call every x seconds, the server holds the request until it has new information for us.
Having good experience with Symfony, I tried to implement a simple demo of this API, and it works great. I had a problem with session blocking (while the AJAX call is being held, other requests from the same session cannot reach the server), so I simply added the following to the action:
public function executeIndex(sfWebRequest $request)
{
    session_write_close();

    // ... rest of the action ...
}
(see also this link)
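For reference, the elided part of that action could look roughly like this (symfony 1.x / Doctrine 1.x style; the Feed model, its created_at column and the exact timings are assumptions taken from the description above):

public function executeIndex(sfWebRequest $request)
{
  // Release the session lock so other requests from this user are not blocked.
  session_write_close();

  $since   = (int) $request->getParameter('since', 0);
  $started = time();

  do {
    // Fetch feeds newer than the client's timestamp (assumed schema).
    $feeds = Doctrine_Core::getTable('Feed')
      ->createQuery('f')
      ->where('f.created_at > ?', date('Y-m-d H:i:s', $since))
      ->execute();

    if (count($feeds) > 0) {
      return $this->renderText(json_encode(array(
        'timestamp' => time(), // or the created_at of the newest feed
        'feeds'     => $feeds->toArray(),
      )));
    }

    sleep(rand(1, 3));                  // check every 1-3 seconds, as described
  } while (time() - $started < 120);    // give up after about 2 minutes

  // Nothing new: the client simply reconnects with the same timestamp.
  return $this->renderText(json_encode(array('timestamp' => $since, 'feeds' => array())));
}

Note that each request held in this loop still ties up a PHP process, and usually a database connection, for its whole lifetime, which leads directly to the scaling problems described below.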
Then I tested massive access to the API: 100 users work fine, but with 1000 everything crashes.
I realized that I have two problems:
For each access a new DB connection is opened
For each access the server executes a new process
For the first problem I tried setting persistent: true in my database.yml Doctrine connector. When I monitored the server connections I saw that each access to the API still opens a new connection, so basically I am still stuck with the same two problems.
Does anyone have any ideas or experience with this issue? Or should I give up on the idea of implementing my API with Symfony?
I think using Symfony for this is the wrong approach. Using sockets would be much easier.
For example, have a look at Node.js or the APE project (Comet).
They are both able to handle far more concurrent users than Apache, lighttpd or nginx...
Apache creates a separate thread for each user, and each thread holds its own database connection; that's why the number of DB connections is so high.

Streaming API vs Rest API?

The canonical example here is Twitter's API. I understand conceptually how the REST API works: essentially it's just a query to their server for your particular request, to which you then receive a response (JSON, XML, etc.), great.
However, I'm not exactly sure how a streaming API works behind the scenes. I understand how to consume it: for example, with Twitter you listen for a response, then listen for data on that response, and the tweets come in chunks. You build the chunks up in a string buffer and wait for a line feed, which signifies the end of a tweet. But what are they doing to make this work?
Let's say I had a bunch of data and I wanted to set up a streaming API locally for other people on the net to consume (just like Twitter). How is this done, and with what technologies? Is this something Node.js could handle? I'm just trying to wrap my head around what they are doing to make this thing work.
Twitter's streaming API is essentially a long-running request that's left open; data is pushed into it as and when it becomes available.
The repercussion of that is that the server has to be able to deal with lots of concurrent open HTTP connections (one per client). A lot of existing servers don't manage that well; for example, Java servlet engines assign one thread per request, which can (a) get quite expensive and (b) quickly hit the normal max-threads setting, preventing subsequent connections.
As you guessed, the Node.js model fits the idea of a streaming connection much better than, say, a servlet model does. Requests and responses are exposed as streams in Node.js but don't occupy an entire thread or process, which means you can keep pushing data into a stream for as long as it remains open without tying up excessive resources (although this is subjective). In theory you could have a lot of concurrent open responses connected to a single process and write to each one only when necessary.
If you haven't looked at it already the HTTP docs for Node.js might be useful.
I'd also take a look at technoweenie's Twitter client to see what the consumer end of that API looks like with Node.js, the stream() function in particular.