S3 presigned download URL immediately expired, why? - amazon-s3

I have an app that generates presigned URLs (using the Java SDK's generatePresignedUrl method).
Everything works in one environment (an EU_central_1 server), but the same app published to another environment (the client's EU_West_1) generates links that don't work. This is what S3 returns when I try to download the object right after creating a URL:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>600</X-Amz-Expires>
<Expires>2016-05-26T09:32:44Z</Expires>
<ServerTime>2016-05-26T09:33:03Z</ServerTime>
As you can see, X-Amz-Expires was set to 600 seconds, but the Expires tag says the URL expired almost immediately.
Is it a problem with GeneratePresignedUrlRequest.setExpiration calculating an incorrect expiration time?
This is my code that sets the expiration time:
Date expiration = new Date();
expiration.setTime(expiration.getTime() + 1000 * 600); // now + 600 seconds (10 minutes)
GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucketName, key);
generatePresignedUrlRequest.setMethod(HttpMethod.GET);
generatePresignedUrlRequest.setExpiration(expiration);
URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);
It looks like both servers return the same time. These are the responses from two different EC2 servers connected to two different S3 endpoints in the same region. One has the expiration set to 4 seconds, the second to 4000 (so the resource can be downloaded right after the link is created).
Response from the server that works correctly:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>4</X-Amz-Expires>
<Expires>2016-05-31T09:54:04Z</Expires>
<ServerTime>2016-05-31T11:00:17Z</ServerTime>
Response from the server with the presigned URL problem:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>4000</X-Amz-Expires>
<Expires>2016-05-31T10:49:54Z</Expires>
<ServerTime>2016-05-31T11:00:07Z</ServerTime>
Both links were created at the same time (with a few seconds' difference for the page refresh).

Signature V4 (unlike V2) does not rely on the signature generation code to do the time math to figure out the expiration time.
Generating a V4 signature (as you are doing) requires that you know what time it is now, and include that value as X-Amz-Date. AWS then does the math on their side. "Hey, this guy says he signed it 11 minutes ago, and it's only good for 10 minutes... denied!"
Check the clock on the machine generating the signature.
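If you want to check for drift from code rather than by eyeballing date, here is a minimal sketch (this is not anything the AWS SDK does for you; the endpoint URL and class name are only examples) that compares the local clock against the Date header the S3 endpoint returns:
import java.net.HttpURLConnection;
import java.net.URL;
import java.time.Duration;
import java.time.Instant;

// Rough clock-skew check: compare this machine's clock with the Date header
// that the S3 endpoint sends back. The endpoint below is only an example;
// use the regional endpoint you actually sign requests against.
public class ClockSkewCheck {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("https://s3.eu-west-1.amazonaws.com").openConnection();
        conn.setRequestMethod("HEAD");
        long serverMillis = conn.getHeaderFieldDate("Date", 0L); // 0 if the header is missing
        long localMillis = Instant.now().toEpochMilli();
        conn.disconnect();

        long skewSeconds = Duration.ofMillis(localMillis - serverMillis).getSeconds();
        System.out.println("Local clock minus S3 server clock: " + skewSeconds + " s");
        // A local clock that lags S3 by anything close to your X-Amz-Expires value
        // (600 s in the question) produces URLs that are already expired on arrival.
    }
}
If the skew turns out to be significant, syncing the instance clock with NTP (or chrony, as the next answer suggests) is the real fix.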

Please refer to the article below to sync the time between EC2 and S3:
https://aws.amazon.com/blogs/aws/keeping-time-with-amazon-time-sync-service/
You need to use a service called chrony.
It's a versatile implementation of NTP that is a bit more accurate and accommodates leap seconds.
Use the information below to troubleshoot:
You can check the current time on a Linux machine with the date command.
X-Amz-Date tells you when the URL was signed on the EC2 instance (or whatever machine the code runs on).
In the response,
<Expires>2016-05-31T10:49:54Z</Expires>
<ServerTime>2016-05-31T11:00:07Z</ServerTime>
Expires tells you when the signature expires (X-Amz-Date plus X-Amz-Expires), and ServerTime is the time on the S3 server when the request was received.
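You can actually read the drift off the two responses quoted above. Subtracting X-Amz-Expires from Expires gives the signing time: 2016-05-31T09:54:00Z on the working server (09:54:04 minus 4 s) and 2016-05-31T09:43:14Z on the problem server (10:49:54 minus 4000 s). Since both links were created at essentially the same moment, the problem server's clock is about 11 minutes behind, which is more than enough to make a 600-second URL expired the instant it is generated.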

Related

Cache an Express REST route on browser side

Is there a way to tell the browser that it should cache the response body (JSON) for minutes/hours/days?
I want to reduce requests to the server on a specific route where the content changes very rarely (probably once a week). This would reduce traffic on the client side as well.
I've tried:
res.set('Cache-Control', 'public, max-age=6000');
res.set('Expires', new Date(Date.now() + 60000).toUTCString());
res.set('Pragma', 'cache');
but Chrome ignores that, or maybe my approach is wrong. I'm clueless and Google hasn't helped yet.
The final result should look like this (Chrome Network tab):
First client request: Status Code 200 OK
Second client request: Status Code 200 OK (from disk cache)
etc.
After the time expires, Status Code 200 OK again (without "from disk cache"), the way it works for static files (images).
I can only find material on server-side caching, but that won't reduce GET requests to the backend.

Issue when playing DashCast live stream

I'm trying to capture the desktop and stream it live through an Apache server using DashCast. It captures and plays correctly when I do it on demand; however, when I do it live and then play it with MP4Client, it shows only a black screen, and I don't even get any error message while capturing. The commands I'm using are:
DashCast -vf x11grab -vres 1280x720 -v :0.0 -npts -live -out /public_html/
And then I play with:
MP4Client http://localhost/vitor/dashcast.mpd
Which results in the following output:
MP4Client http://localhost/vitor/dashcast/dashcast.mpd
Using config file in /home/vitor directory
System info: 11948 MB RAM - 8 cores
Modules Found : 36
Loading GPAC Terminal
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
Terminal Loaded in 35 ms
Opening URL localhost/vitor/dashcast/dashcast.mpd
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
Service Connected
So what am I doing wrong? The client apparently connects correctly to the server and opens the player, but then it doesn't show anything on screen. I'm using Ubuntu 14.04 with GPAC version 0.5.0.
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
This message indicates that there is a difference ('slight' is the wrong word here given the actual difference!) between the UTC time indicated in the MPD's availabilityStartTime attribute and the current time that MP4Client uses to compute which segment to fetch. This is only relevant for live streaming, because for on-demand content all segments are assumed to be available all the time.
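For scale: the reported drift of 3563501 ms is roughly 59 minutes, close to a full hour, which is the kind of offset you typically see when one side reports local time from a neighbouring (or misconfigured) timezone instead of UTC.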
MP4Client uses different strategies to determine the 'current' time. The system time on the client may be different from the system time on the server, if they are using different NTP servers for instance. System time is not reliable. So MP4Client tries to get the time from the server. It first tries to use a specific HTTP "Server-UTC" header that the server may set. See for example this code. If this header is not set, it looks at the HTTP "Date" Header, even if it's not very precise. In your case, your HTTP server probably has a time configuration that does not match the system time. You can tell MP4Client to stop using the server information and to rely on its system time. Since you are using client and server on the same machine, that should work. The documentation of that option is here. For that, use:
MP4Client http://localhost/file.mpd -opt DASH:UseServerUTC=no
Alternatively, you can try to play the MPD locally without going through the web server.
MP4Client file.mpd
If that is not working, open an issue on GPAC's GitHub providing as much information as possible, in particular the result of MP4Box -version.

Difference in response time between http vs https

I tested my web site with 100 users over both HTTP and HTTPS. The response time obtained with HTTPS is much higher than with HTTP, nearly four times greater. Can anyone explain why the response time is so much higher with HTTPS, or do I need to change any SSL property in JMeter's system.properties? Thanks in advance!
The SSL handshake requires around 4 extra exchanges to establish a connection, so the first request should be something like 4x longer than over plain HTTP. See an SSL handshake diagram for more info.
However, if you see a 4x performance degradation for all requests, that doesn't sound right.
The following JMeter properties control SSL behaviour:
https.sessioncontext.shared - controls whether SSL session contexts are created per thread (false) or shared across all threads (true)
https.use.cached.ssl.context - controls whether a cached SSL context is reused between iterations
These properties live in the jmeter.properties file under the /bin folder of your JMeter installation. It's also possible to override them using the -J command-line argument, as follows:
jmeter -Jhttps.sessioncontext.shared=true -Jhttps.use.cached.ssl.context=true
See Apache JMeter Properties Customization Guide for more details.
If the settings above don't help, you'll need to review your test plan and perhaps profile the application to see where the extra time is being spent.

Fine Uploader getting "Policy expired" message sending to S3 for some users

I recently implemented Fine Uploader and it's been mostly successful. A few users, however, are not able to upload. They are all using modern browsers (IE10, FF, and Chrome). One let me remotely access their machine, and I was able to try it on both Chrome and FF.
I got the same error on both:
[10:45:28.330] "[FineUploader 3.8.0] Received response status 403 with body: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy expired.</Message><RequestId>--removed--</RequestId><HostId>--removed--</HostId></Error>"
Is it something with the timezone settings on their computer that's causing an invalid policy to be generated?
The timezone settings will have no effect as times are UTC. However, if the time on the user's computer is not accurate (say, off by 5 or more minutes), then the policy will be expired, according to Amazon.
Fine Uploader sets an expiration date to 5 minutes (again, in UTC). The date used is generated in the browser, so your client machine's time will be used. If the client machine's clock is slow by 5 or more minutes, the policy will be seen as expired when Amazon handles it.
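As a concrete example: if the user's clock is 6 minutes slow (it reads 10:00 UTC when the actual time is 10:06), the policy Fine Uploader generates will expire at 10:05 UTC, which is already in the past by the time Amazon evaluates it.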
I'm fairly sure that the issue is due to a significant drift on your customer's machine clock. If you verify this, I suggest you instruct them to keep system clock synced with a time server.
Update: A new feature was added to Fine Uploader 5.5 that allows you to overcome extreme clock drift on user machines/browsers. See the clock drift section on the S3 feature page for more information.

Background Intelligent Transfer Service and Amazon S3

I'm using SharpBITS to download a file from Amazon S3.
// Create new download job.
BitsJob job = this._bitsManager.CreateJob(jobName, JobType.Download);
// Add file to job.
job.AddFile(downloadFile.RemoteUrl, downloadFile.LocalDestination);
// Resume the job.
job.Resume();
It works for files which do not need authentication. However, as soon as I add the authentication query string to the Amazon S3 file request, the response from the server is HTTP status 403 (access denied). The URL works fine in a browser.
Here is the HTTP request from the BITS service:
HEAD /mybucket/6a66aeba-0acf-11df-aff6-7d44dc82f95a-000001/5809b987-0f65-11df-9942-f2c504c2c389/v10/summary.doc?AWSAccessKeyId=AAAAZ5SQ76RPQQAAAAA&Expires=1265489615&Signature=VboaRsOCMWWO7VparK3Z0SWE%2FiQ%3D HTTP/1.1
Accept: */*
Accept-Encoding: identity
User-Agent: Microsoft BITS/7.5
Connection: Keep-Alive
Host: s3.amazonaws.com
The only difference from the request a web browser makes is the request type: Firefox makes a GET request and BITS makes a HEAD request. Are there any issues with Amazon S3 HEAD requests and query-string authentication?
Regards, Blaz
You are probably right that a proxy is the only way around this. BITS uses the HEAD request to get a content length and decide whether or not it wants to chunk the file download. It then does the GET request to actually retrieve the file - sometimes as a whole if the file is small enough, otherwise with range headers.
If you can use a proxy or some other trick to give it any kind of response to the HEAD request, it should get unstuck. Even if the HEAD request is faked with a fictitious content length, BITS will move on to a GET. You may see duplicate GET requests in a case like this, because if the first GET request returns a content length longer than the original HEAD request, BITS may decide "oh crap, I better chunk this after all."
Given that, I'm kind of surprised it's not smart enough to recover from a 403 error on the HEAD request and still move on to the GET. What is the actual behaviour of the job? Have you tried watching it with bitsadmin /monitor? If the job is sitting in a transient error state, it may do that for around 20 mins and then ultimately recover.
Before beginning a download, BITS sends an HTTP HEAD request to the server in order to figure out the remote file's size, timestamp, etc. This is especially important for BranchCache-based BITS transfers and is the reason why server-side HTTP HEAD support is listed as an HTTP requirement for BITS downloads.
That being said, BITS bypasses the HTTP HEAD request phase, issuing an HTTP GET request right away, if either of the following conditions is true:
1. The BITS job is configured with the BITS_JOB_PROPERTY_DYNAMIC_CONTENT flag.
2. BranchCache is disabled AND the BITS job contains a single file.
Workaround (1) is the most appropriate, since it doesn't affect other BITS transfers in the system.
For workaround (2), BranchCache can be disabled through BITS' DisableBranchCache group policy. You'll need to do "gpupdate" from an elevated command prompt after making any Group Policy changes, or it will take ~90 minutes for the changes to take effect.