Does gcloud-node have a way to set the http timeout for uploading files to storage? - gcloud-node

I'm uploading small files (via Bucket.upload), and occasionally I get a 503 or 500 from the Google backend, usually after ~5 or ~10 seconds (I assume it's a timeout on Google's end). I noticed that gcloud-node's util sets the request timeout to 60 seconds, but I can't seem to find a way to set that timeout myself.
Thanks!

On my phone while playing with Mickey Mouse figurines with my daughter, so my answer can't be too long. Check out Request interceptors: https://googlecloudplatform.github.io/gcloud-node/#/docs/v0.28.0/gcloud
Hopefully that helps!
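For what it's worth, an interceptor in gcloud-node is just an object with a `request` function that receives the outgoing request options and returns them (possibly modified) before each call, and the underlying `request` module honors a `timeout` option. Here's a minimal, self-contained sketch of that hook; the `applyInterceptors` helper and the option values are stand-ins for illustration, not the real gcloud-node internals:

```javascript
// Stand-in for gcloud-node's interceptor hook: each interceptor's `request`
// function gets the request options and returns them, possibly modified.
function applyInterceptors(interceptors, reqOpts) {
  return interceptors.reduce(
    (opts, interceptor) =>
      typeof interceptor.request === 'function' ? interceptor.request(opts) : opts,
    reqOpts
  );
}

// The shape you would push onto `gcloud.interceptors` in the real library:
const interceptors = [{
  request: (reqOpts) => {
    reqOpts.timeout = 10000; // 10s instead of the library's 60s default
    return reqOpts;
  },
}];

const finalOpts = applyInterceptors(interceptors, {
  uri: 'https://storage.googleapis.com/upload',
  timeout: 60000,
});
// finalOpts.timeout is now 10000; other options pass through untouched
```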

Related

How can I limit the number of requests my Ktor Clientside can make to an endpoint?

I am using the Notion API and, weirdly enough, I never receive the error code for exceeding the rate limit; my requests just time out, even with 3 retries spaced 15 seconds apart, but I know I am exceeding the limit of 3 requests per second.
I found something about this in the Ktor client documentation, but I didn't understand how it works:
Documentation
What I tried did make requests slower, but not enough to space them at least 333 ms apart.
I also found some things on GitHub, but they were made for Ktor server, and I am using the Ktor client.
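One language-agnostic way to stay under 3 requests/second is to reserve a start time for every request and never let two starts be less than ~334 ms apart. A minimal sketch of the bookkeeping (shown in plain JavaScript for brevity; `makeThrottle` and the injectable clock are illustrative names, and in Ktor you would feed the returned wait into a `delay()` before each request):

```javascript
// Minimal client-side throttle: never start two requests less than
// `minIntervalMs` apart (334ms keeps you under 3 requests/second).
function makeThrottle(minIntervalMs, now = Date.now) {
  let nextAllowed = 0;
  // Returns how many ms the caller must wait before firing its request,
  // and reserves the slot so concurrent callers queue up behind it.
  return function reserve() {
    const t = now();
    const wait = Math.max(0, nextAllowed - t);
    nextAllowed = Math.max(t, nextAllowed) + minIntervalMs;
    return wait;
  };
}

// Usage sketch: const reserve = makeThrottle(334);
// await sleep(reserve()); then make the request.
```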

How long is the Error 429 TooManyRequests cooldown?

I accidentally caused my code to send over 4500 requests to the News API and now I am locked out of the API response.
The News API website just tells you to back off for a while in their error documentation, but no actual waiting time is mentioned. I've already waited over 3 hours.
Is the cooldown specific to the server, or are these things universally the same?
Do I need to wait until tomorrow (24 hours)?
There was no Retry-After header as far as I can tell.
OK, I got a reply from API support: 500 requests per 12 hours, i.e. 1k/day.
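When an API doesn't send a Retry-After header, a common fallback is exponential backoff with a cap. A small sketch of such a schedule (the function name and the base/cap values are arbitrary choices for illustration, not anything the News API specifies):

```javascript
// Delay before retry attempt `attempt` (0-based): honor Retry-After when the
// server sends it, otherwise double a base delay each attempt, up to a cap.
function backoffDelayMs(attempt, retryAfterSeconds, baseMs = 1000, capMs = 60000) {
  if (retryAfterSeconds != null) {
    return retryAfterSeconds * 1000; // the server told us exactly how long
  }
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```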

Laravel Api response

I am new to Laravel and API development, and I'm facing a problem. The workflow of my API is: a user sends POST data to the API, then the API processes that data into the database. There is one step in which PHP waits 30 minutes while inserting data into two different tables.
The problem is, as far as I know, I can only send the JSON response back to the user after that process completes, but that way the user has to wait 30 minutes.
Is there a way for the 30-minute process to run in the background, and to send the JSON response immediately once it has started?
1) I studied queues, but the web host I will be using won't give me access to the server itself to install anything; it only gives me space for my files.
I am confused about how to achieve this, so that the user doesn't have to wait long for a response.
I will really appreciate any help.
Thanks,
You can use the queue without any server installation. All your configuration goes in the config/queue.php file.
You can use
Amazon SQS: aws/aws-sdk-php ~3.0
Beanstalkd: pda/pheanstalk ~3.0
Redis: predis/predis ~1.0
Read more here: https://laravel.com/docs/5.2/queues#introduction
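To make that concrete, here's a sketch of the Redis setup for Laravel 5.2, assuming predis/predis is in your composer.json and a Redis server is reachable (the values mirror the defaults that ship in config/queue.php):

```php
// config/queue.php (sketch for Laravel 5.2; set QUEUE_DRIVER=redis in .env)
'default' => env('QUEUE_DRIVER', 'sync'),

'connections' => [
    'redis' => [
        'driver'     => 'redis',
        'connection' => 'default',
        'queue'      => 'default',
        'expire'     => 60,
    ],
],
```

Note that you still need a worker process (php artisan queue:listen or queue:work) running somewhere to consume the jobs; the drivers above only remove the need to install a queue daemon yourself.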

Understanding fiddler statistics

We are sending an HTTP WCF request to a 3rd-party system hosted on our servers and are experiencing a significant delay between sending the request and getting the response. The 3rd party claims that they complete their work in a few seconds, but in Fiddler I can see a significant gap between ServerBeginResponse and GotResponseHeaders.
I'm not sure what could account for this delay. Could someone explain what the ServerBeginResponse and GotResponseHeaders timers in Fiddler actually mean?
The timers mean pretty much exactly what they say: the ServerGotRequest timer is set when Fiddler is done transmitting the HTTP request to the server, and the GotResponseHeaders timer is set when Fiddler has read the complete set of response headers from the server.
In your screenshot, there's a huge delay between ServerBeginResponse (which is set when the first byte of the server's response is returned) and GotResponseHeaders, which suggests that the server spent a significant amount of time completing the return of the HTTP response headers.
If you send me (via Help > Send Feedback) a SAZ capture of this traffic, I can take a closer look at it.

Heroku: I have a request that takes more than 30 seconds and it breaks

I have a request that takes more than 30 seconds and it breaks.
What is the solution for this? I am not sure whether adding more dynos will fix it.
Thanks
You should probably read the Heroku Dev Center article on this, as the full information will be more helpful; here's a small summary:
To answer the timeout question:
Cedar supports long-polling and streaming responses. Your app has an initial 30-second window to respond with a single byte back to the client. Each byte sent (either received from the client or sent by your application) resets a rolling 55-second window. If no data is sent during the 55-second window, your connection will be terminated.
(That is, if you had Cedar instead of Aspen or Bamboo you could send a byte every thirty seconds or so just to trick the system. It might work.)
To answer your dynos question:
Additional concurrency is of no help whatsoever if you are encountering request timeouts. You can crank your dynos to the maximum and you'll still get a request timeout, since it is a single request that is failing to serve in the correct amount of time. Extra dynos increase your concurrency, not the speed of your requests.
(That is, don't bother adding more dynos.)
On request timeouts:
Check your code for infinite loops. If you're instead doing genuinely heavy work, you should move that heavy lifting into a background job which can run asynchronously from your web request. See Queueing for details.