Is there a way to tell the browser that it should cache the response body (JSON) for minutes/hours/days?
I want to reduce server requests on a specific route where the content changes very rarely (probably once a week). This would reduce traffic on the client side as well.
I've tried with:
res.set('Cache-Control', 'public, max-age=6000');               // max-age is in seconds (6000 s = 100 min)
res.set('Expires', new Date(Date.now() + 60000).toUTCString()); // 60000 ms = 1 min from now
res.set('Pragma', 'cache');
but Chrome seems to ignore it, though maybe I'm doing it wrong. I'm clueless and Google hasn't helped yet.
The final result should look like this (Chrome Network tab):
First client request: Status Code 200 OK
Second client request: Status Code 200 OK (from disk cache)
etc.
After the time expires, again Status Code 200 OK (WITHOUT "from disk cache"), the way it works for static files (images).
I can only find material on server-side caching, but that won't reduce GET requests to the backend.
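For reference, a stripped-down sketch of the kind of route I mean (route path and payload are made up):

const express = require('express');
const app = express();

// Rarely changing JSON; the goal is that the browser serves repeat requests from its cache.
app.get('/api/rarely-changing', (req, res) => {
  res.set('Cache-Control', 'public, max-age=604800'); // 7 days, in seconds
  res.json({ updatedAt: '2020-01-01', items: [] });   // placeholder payload
});

app.listen(3000);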
I'm attempting to POST some JSON to an Express.js endpoint. If the size of the JSON is less than 64k, it succeeds just fine. If it exceeds 64k, the request is never completely received by the server. The problem only occurs when running Express locally; when running on Heroku, the request goes through without issue.
The problem is seen on macOS, Linux (Ubuntu 19), and Windows, and is present when using Chrome, Firefox, or Safari.
When I make the request using Postman, it fails.
If I make the request using curl, it succeeds.
If I make the request after artificially throttling Chrome to "slow 3G" levels in the network settings, it succeeds.
I've traced through Express and discovered that the problem appears when attempting to parse the body. The request gets passed to body-parser.json(), which in turn calls getRawBody to get the Buffer from the request.
getRawBody processes the incoming request stream and converts it into a buffer. It receives the first chunk of the request just fine, but never receives the second chunk. Eventually the request continues parsing with an empty buffer.
The size limit on body-parser is set to 100mb, so that is not the problem; getRawBody never returns, so body-parser never gets a crack at it.
Logging the events from getRawBody, I can see the first chunk come in, but no other events are fired.
Watching Wireshark logs, all the data is getting sent over the wire, but it looks like Express is not receiving all the chunks for some reason. I think it has to be due to how Express is processing the packets, but I have no idea how to proceed.
On the off chance anyone in the future runs into the same thing: the root problem in this case was that we were overwriting req.socket with our socket.io client. req.socket is used internally by Node to transfer data. We were overwriting it in such a way that the first packets would get through, but not subsequent ones, so if the request was processed quickly enough, all was well.
tl;dr: Don't overwrite req.socket.
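A minimal sketch of the safer pattern, assuming socket.io is being exposed to route handlers via middleware (the req.io property name is just an illustration):

const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server);

// Expose socket.io under a custom property; do NOT assign to req.socket,
// which Node uses internally to stream the incoming request body.
app.use((req, res, next) => {
  req.io = io;        // safe: custom property
  // req.socket = io; // unsafe: later body chunks never arrive
  next();
});

app.use(express.json({ limit: '100mb' }));

server.listen(3000);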
I have an app that generates presigned URLs (using the Java SDK's generatePresignedUrl method).
Everything works in one environment (an eu-central-1 server), but the same app published to another environment (the client's eu-west-1) generates links that don't work. This is the info from S3 when I try to download the object right after creating the URL:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>600</X-Amz-Expires>
<Expires>2016-05-26T09:32:44Z</Expires>
<ServerTime>2016-05-26T09:33:03Z</ServerTime>
As you can see, X-Amz-Expires was set to 600 seconds, but the Expires tag says the link had already expired, even though I requested it right after creating the URL.
Is it a problem with GeneratePresignedUrlRequest.setExpiration calculating an incorrect expiration time?
This is my code to set the expiration time:
// Expire the URL 600 seconds (10 minutes) from "now", according to this machine's clock
Date expiration = new Date();
expiration.setTime(expiration.getTime() + 1000 * 600);

GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucketName, key);
generatePresignedUrlRequest.setMethod(HttpMethod.GET);
generatePresignedUrlRequest.setExpiration(expiration);

URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);
It looks like both servers return the same time. Below are the responses from two different EC2 servers connected to two different S3 endpoints in the same region. One has the expiry set to 4 seconds, the second to 4000 seconds (so the resource can be downloaded right after the link is created).
Response from server working correctly:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>4</X-Amz-Expires>
<Expires>2016-05-31T09:54:04Z</Expires>
<ServerTime>2016-05-31T11:00:17Z</ServerTime>
Response from the server with the presigned URL problem:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>4000</X-Amz-Expires>
<Expires>2016-05-31T10:49:54Z</Expires>
<ServerTime>2016-05-31T11:00:07Z</ServerTime>
Both links were created at the same time (with a few seconds' difference for a page refresh).
Signature V4 (unlike V2) does not rely on the signature generation code to do the time math to figure out the expiration time.
Generating a V4 signature (as you are doing) requires that you know what time it is now, and include that value as X-Amz-Date. AWS then does the math on their side. "Hey, this guy says he signed it 11 minutes ago, and it's only good for 10 minutes... denied!"
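For illustration, a V4 presigned URL carries both values right in the query string, along these lines (bucket, key, and credentials are made up / truncated):

https://mybucket.s3.amazonaws.com/some/key.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA...%2F20160531%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20160531T094314Z&X-Amz-Expires=4000&X-Amz-SignedHeaders=host&X-Amz-Signature=...

S3 adds X-Amz-Expires to X-Amz-Date and refuses the request once its own clock is past that sum.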
Check the clock on the machine generating the signature.
Please refer to the article below to sync the time (between EC2 and S3):
https://aws.amazon.com/blogs/aws/keeping-time-with-amazon-time-sync-service/
You need to use a service called chrony. It's a versatile implementation of NTP, a bit more accurate, and it accommodates leap seconds.
Use the information below to troubleshoot:
You can check the time on the current Linux machine using the date command.
X-Amz-Date tells you when your URL was signed on the EC2 instance or machine where the code is running.
In the response,
<Expires>2016-05-31T10:49:54Z</Expires>
<ServerTime>2016-05-31T11:00:07Z</ServerTime>
Expires tells you when the signature expired (the signing time plus X-Amz-Expires), and ServerTime tells you the time on the S3 server when the request was received.
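As a quick sanity check, here is a rough sketch (plain Node, values taken from the failing response above) that backs out what the signing host's clock must have said:

// Values from the failing response
const expires = Date.parse('2016-05-31T10:49:54Z');    // <Expires>
const serverTime = Date.parse('2016-05-31T11:00:07Z'); // <ServerTime>
const amzExpiresMs = 4000 * 1000;                      // <X-Amz-Expires>, converted to ms

// Expires = signing time + X-Amz-Expires, so the signing host thought it was:
console.log(new Date(expires - amzExpiresMs).toISOString()); // 2016-05-31T09:43:14.000Z
// ...and by S3's clock the URL had already been expired for:
console.log((serverTime - expires) / 1000, 'seconds');       // 613 seconds

Since both links were created at essentially the same moment and the working server signed at 09:54:00 by its own clock, the failing host's clock was roughly ten minutes behind.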
I need a request to reload the server and use the updated source without the user having to intervene.
As it stands, I ping the server with a request to update the source using git. I reload Apache to flush INC/conf files (I'm aware the current request hasn't been flushed). To keep the user from having to interact, I return a silent JSON response to the client with the details needed to continue. The client script then POSTs back to the server. The problem is, the second request runs against the previous source. Shouldn't it be a new request handled by the updated parent process?
What am I missing? Thanks.
The web server is most likely asking the client to cache by setting HTTP headers (e.g. Cache-Control, Expires, or ETag/Last-Modified) in its response.
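For example, a response carrying headers along these lines (values are illustrative) lets the browser reuse its cached copy instead of asking the server again:

HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
ETag: "abc123"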
Config
We have a Play 2.1.0 with AngularJS setup in production mode.
We have a reverse proxy / load balancer set up with Apache 2.2, something like what is mentioned here:
http://www.playframework.com/documentation/2.1.0/HTTPServer
This whole app runs in an iframe, navigated to from a JBoss application.
Problem
Most of the time it works, but sometimes, when the connection has been left idle for 2-3 hours, untouched, with no one hitting the reverse proxy URL to load JBoss/Play, we get a 502 proxy error in the iframe content after a wait of a few minutes.
Play receives the request but somehow decides not to respond at all. This happens only for the first request or two after the wake-up; when we then refresh the page, Play receives the request and responds properly.
Tried
We took a tcpdump on the Play port: all the requests are received, but no response is sent by Play in the failed scenario, whereas the same request gets a response from Play on subsequent attempts.
X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Server, Connection: Keep-Alive - all these headers are present in the tcpdump of the request whose response was lost.
We tried KeepAlive with timeouts in the proxy server, without much help. Why doesn't Play respond to the initial connections after the idle period? Is there any configuration we can set to keep it alive?
Workaround
Polling the Play server URL every half an hour from the same server makes the issue non-reproducible.
Still, any help/suggestions to fix this issue properly would be really appreciated.
I tried to solve this problem myself. Approaches like the answers mentioned here and here did not change anything.
I then decided to go with nginx again, which I had been using with Play applications before. The setup can be found here. Since then, the problem is gone.
I'm using SharpBITS to download files from Amazon S3.
// Create new download job.
BitsJob job = this._bitsManager.CreateJob(jobName, JobType.Download);
// Add file to job.
job.AddFile(downloadFile.RemoteUrl, downloadFile.LocalDestination);
// Resume the job.
job.Resume();
It works for files which do not need authentication. However, as soon as I add the authentication query string for the Amazon S3 file request, the response from the server is HTTP status 403 (Forbidden). The URL works fine in a browser.
Here is the HTTP request from the BITS service:
HEAD /mybucket/6a66aeba-0acf-11df-aff6-7d44dc82f95a-000001/5809b987-0f65-11df-9942-f2c504c2c389/v10/summary.doc?AWSAccessKeyId=AAAAZ5SQ76RPQQAAAAA&Expires=1265489615&Signature=VboaRsOCMWWO7VparK3Z0SWE%2FiQ%3D HTTP/1.1
Accept: */*
Accept-Encoding: identity
User-Agent: Microsoft BITS/7.5
Connection: Keep-Alive
Host: s3.amazonaws.com
The only difference from the request a web browser makes is the method: Firefox issues a GET request, while BITS issues a HEAD request. Are there any issues with Amazon S3 HEAD requests and query string authentication?
Regards, Blaz
You are probably right that a proxy is the only way around this. BITS uses the HEAD request to get a content length and decide whether or not it wants to chunk the file download. It then does the GET request to actually retrieve the file - sometimes as a whole if the file is small enough, otherwise with range headers.
If you can use a proxy or some other trick to give it any kind of response to the HEAD request, it should get unstuck. Even if the HEAD request is faked with a fictitious content length, BITS will move on to a GET. You may see duplicate GET requests in a case like this, because if the first GET request returns a content length longer than the original HEAD request, BITS may decide "oh crap, I better chunk this after all."
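A rough sketch of the kind of local pass-through I mean (Node; the port and host are placeholders, and BITS would be pointed at the local port instead of S3): answer the HEAD probe locally and forward everything else to S3.

const http = require('http');
const https = require('https');

const S3_HOST = 's3.amazonaws.com'; // placeholder: whatever host the signed URL points at

http.createServer((req, res) => {
  if (req.method === 'HEAD') {
    // BITS only needs *some* answer here to decide how to download;
    // a fictitious Content-Length is enough, and the later GET returns the real size.
    res.writeHead(200, { 'Content-Length': '1', 'Accept-Ranges': 'bytes' });
    res.end();
    return;
  }

  // Forward the GET (with its signed query string) unchanged to S3.
  const upstream = https.request(
    { host: S3_HOST, path: req.url, method: req.method, headers: { Host: S3_HOST } },
    (s3res) => {
      res.writeHead(s3res.statusCode, s3res.headers);
      s3res.pipe(res);
    }
  );
  req.pipe(upstream);
}).listen(8080);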
Given that, I'm kind of surprised it's not smart enough to recover from a 403 error on the HEAD request and still move on to the GET. What is the actual behaviour of the job? Have you tried watching it with bitsadmin /monitor? If the job is sitting in a transient error state, it may do that for around 20 mins and then ultimately recover.
Before beginning a download, BITS sends an HTTP HEAD request to the server in order to figure out the remote file's size, timestamp, etc. This is especially important for BranchCache-based BITS transfers and is the reason why server-side HTTP HEAD support is listed as an HTTP requirement for BITS downloads.
That being said, BITS bypasses the HTTP HEAD request phase, issuing an HTTP GET request right away, if either of the following conditions is true:
The BITS job is configured with the BITS_JOB_PROPERTY_DYNAMIC_CONTENT flag.
BranchCache is disabled AND the BITS job contains a single file.
Workaround (1), setting the BITS_JOB_PROPERTY_DYNAMIC_CONTENT flag on the job, is the most appropriate, since it doesn't affect other BITS transfers on the system.
For workaround (2), BranchCache can be disabled through BITS' DisableBranchCache group policy. You'll need to do "gpupdate" from an elevated command prompt after making any Group Policy changes, or it will take ~90 minutes for the changes to take effect.