Fine Uploader getting "Policy expired" message sending to S3 for some users

I recently implemented Fine Uploader and it's been mostly successful. A few users, however, are not able to upload. They are all using modern browsers (IE10, FF and Chrome). One let me remotely access their machine, and I was able to try it in both Chrome and FF.
I got the same error on both:
[10:45:28.330] "[FineUploader 3.8.0] Received response status 403 with body: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy expired.</Message><RequestId>--removed--</RequestId><HostId>--removed--</HostId></Error>"
Could it be something with the timezone settings on their computer that is generating an invalid policy?

The timezone settings will have no effect, as times are in UTC. However, if the time on the user's computer is not accurate (say, off by 5 or more minutes), then the policy will already be expired as far as Amazon is concerned.
Fine Uploader sets the policy's expiration date to 5 minutes in the future (again, in UTC). The date used is generated in the browser, so the client machine's time will be used. If the client machine's clock is slow by 5 or more minutes, the policy will be seen as expired when Amazon handles it.
I'm fairly sure that the issue is due to significant drift on your customer's machine clock. If you verify this, I suggest you instruct them to keep their system clock synced with a time server.
Update: A new feature was added to Fine Uploader 5.5 that allows you to overcome extreme clock drift on user machines/browsers. See the clock drift section on the S3 feature page for more information.
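To make the arithmetic concrete, here is a minimal sketch (in Java rather than the browser-side JavaScript Fine Uploader actually uses, and assuming a hypothetical 6-minute skew) of why a slow client clock produces a policy that S3 already considers expired:

import java.time.Duration;
import java.time.Instant;

public class PolicyExpiryDemo {
    public static void main(String[] args) {
        Duration clockSkew = Duration.ofMinutes(6);      // hypothetical: client clock is 6 minutes slow
        Instant serverNow = Instant.now();               // what S3 thinks "now" is
        Instant clientNow = serverNow.minus(clockSkew);  // what the browser thinks "now" is

        // Policy expiration = client "now" + 5 minutes, expressed in UTC
        Instant policyExpiration = clientNow.plus(Duration.ofMinutes(5));

        // S3's check: the policy is expired if its expiration is not after server "now"
        boolean expired = !policyExpiration.isAfter(serverNow);
        System.out.println("Policy expired on arrival: " + expired);  // true whenever the skew is 5 minutes or more
    }
}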

Related

S3 presigned download URL immediately expired, why?

I have an app that generates a presigned URL (using the Java SDK generatePresignedUrl method).
Everything works in one environment (an EU_central_1 server), but the same app published in another environment (the client's EU_West_1) generates links that don't work. This is the info from S3 when I try to download the object right after creating the URL:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>600</X-Amz-Expires>
<Expires>2016-05-26T09:32:44Z</Expires>
<ServerTime>2016-05-26T09:33:03Z</ServerTime>
As you can see, X-Amz-Expires was set to 600 seconds, but the Expires tag says the URL expired almost immediately.
Is it a problem with GeneratePresignedUrlRequest.setExpiration calculating an incorrect expiration time?
This is my code to set the expiration time:
// Expire the URL 600 seconds (10 minutes) from "now", according to this machine's clock
Date expiration = new Date();
expiration.setTime(expiration.getTime() + 1000 * 600);
GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(bucketName, key);
generatePresignedUrlRequest.setMethod(HttpMethod.GET);
generatePresignedUrlRequest.setExpiration(expiration);
URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);
It looks like both servers return the same time. Here are the responses from two different EC2 servers connected to two different S3 endpoints in the same region. One has the expiry set to 4 seconds, the second to 4000 (to be able to download the resource right after creating the link).
Response from the server that works correctly:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>4</X-Amz-Expires>
<Expires>2016-05-31T09:54:04Z</Expires>
<ServerTime>2016-05-31T11:00:17Z</ServerTime>
Response from the server with the presigned URL problem:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>4000</X-Amz-Expires>
<Expires>2016-05-31T10:49:54Z</Expires>
<ServerTime>2016-05-31T11:00:07Z</ServerTime>
Both links were created at the same time (with a few seconds' difference for the page refresh).
Signature V4 (unlike V2) does not rely on the signature generation code to do the time math to figure out the expiration time.
Generating a V4 signature (as you are doing) requires that you know what time it is now, and include that value as X-Amz-Date. AWS then does the math on their side. "Hey, this guy says he signed it 11 minutes ago, and it's only good for 10 minutes... denied!"
Check the clock on the machine generating the signature.
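If you want to confirm the drift empirically, a rough sketch like the one below (hypothetical class name; it only relies on the standard HTTP Date header that AWS returns) compares the local clock against what S3 reports:

import java.net.HttpURLConnection;
import java.net.URL;
import java.time.Duration;
import java.time.Instant;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class ClockDriftCheck {
    public static void main(String[] args) throws Exception {
        // Any S3 endpoint works; we only care about the Date response header
        HttpURLConnection conn = (HttpURLConnection) new URL("https://s3.amazonaws.com").openConnection();
        conn.setRequestMethod("HEAD");
        conn.connect();

        // The Date header is an RFC 1123 date, e.g. "Tue, 31 May 2016 11:00:07 GMT"
        Instant serverTime = ZonedDateTime
                .parse(conn.getHeaderField("Date"), DateTimeFormatter.RFC_1123_DATE_TIME)
                .toInstant();
        Instant localTime = Instant.now();

        System.out.println("Local clock minus AWS clock: " + Duration.between(serverTime, localTime));
    }
}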
Please refer to the article below to sync your time (between EC2 and S3):
https://aws.amazon.com/blogs/aws/keeping-time-with-amazon-time-sync-service/
We need to use a service called chrony.
It's a versatile implementation of NTP that is a bit more accurate and accommodates leap seconds.
Use the information below to troubleshoot:
You can check the time on the current Linux machine using the date command.
X-Amz-Date tells you when your URL was signed on your EC2 instance or whichever machine the code is running on.
In the response,
<Expires>2016-05-31T10:49:54Z</Expires>
<ServerTime>2016-05-31T11:00:07Z</ServerTime>
Expires tells you when the signature expires, based on X-Amz-Expires, and ServerTime tells you the time on the S3 server when the request was received.
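As an illustration of how to read those two fields, here is a small sketch (hypothetical class name, with the values copied from the failing response above) that computes the implied signing time and the gap that S3 compares against X-Amz-Expires:

import java.time.Duration;
import java.time.Instant;

public class ExpiryGap {
    public static void main(String[] args) {
        // Values copied from the failing error response above
        Instant expires = Instant.parse("2016-05-31T10:49:54Z");
        Instant serverTime = Instant.parse("2016-05-31T11:00:07Z");
        long amzExpiresSeconds = 4000; // X-Amz-Expires

        // Expires = signing time + X-Amz-Expires, so this is when the URL was signed
        // according to the clock of the machine that generated it
        Instant signedAt = expires.minusSeconds(amzExpiresSeconds);

        // The request succeeds only while this gap stays below X-Amz-Expires
        System.out.println("Signed at (per signer's clock): " + signedAt);
        System.out.println("Server time when request arrived: " + serverTime);
        System.out.println("Gap: " + Duration.between(signedAt, serverTime));
    }
}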

Issue when playing DashCast live stream

I'm trying to capture the desktop and stream it live via an Apache server using DashCast. It captures and plays correctly when I do it on demand; however, when I do it live and then play it with MP4Client, it shows only a black screen, and I don't even get any error message while capturing. The commands I'm using are:
DashCast -vf x11grab -vres 1280x720 -v :0.0 -npts -live -out /public_html/
And then I play with:
MP4Client http://localhost/vitor/dashcast.mpd
Which results in the following output:
MP4Client http://localhost/vitor/dashcast/dashcast.mpd
Using config file in /home/vitor directory
System info: 11948 MB RAM - 8 cores
Modules Found : 36
Loading GPAC Terminal
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
Terminal Loaded in 35 ms
Opening URL localhost/vitor/dashcast/dashcast.mpd
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
Service Connected
So what am I doing wrong? The client apparently connects correctly to the server and opens the player, but then it doesn't show anything on screen. I'm using Ubuntu 14.04 with GPAC version 0.5.0.
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
This message indicates that there is a difference ('slight' is the wrong word here given the actual difference: 3563501 ms is almost an hour!) between the UTC time indicated in the MPD's availabilityStartTime attribute and the current time that MP4Client uses to compute which segment to fetch. This is only relevant for live streams because, for on-demand content, all segments are assumed to be available all the time.
MP4Client uses different strategies to determine the 'current' time. The system time on the client may be different from the system time on the server, if they are using different NTP servers for instance. System time is not reliable, so MP4Client tries to get the time from the server. It first tries to use a specific HTTP "Server-UTC" header that the server may set. See for example this code. If this header is not set, it looks at the HTTP "Date" header, even if it's not very precise.
In your case, your HTTP server probably has a time configuration that does not match the system time. You can tell MP4Client to stop using the server information and to rely on its system time. Since you are using client and server on the same machine, that should work. The documentation of that option is here. For that, use:
MP4Client http://localhost/file.mpd -opt DASH:UseServerUTC=no
Alternatively, you can try to play the MPD locally without going through the web server.
MP4Client file.mpd
If that does not work, open an issue on GPAC's GitHub, providing as much information as possible, in particular the result of MP4Box -version.

Twitter APIs - Twitter4j - sync issue?

I am using Twitter4J to retrieve user timelines, but it stopped working. The number of accepted requests is fine, but I get an authentication problem, probably related to clock sync?
INFO: Error while querying Twitter: 401:Authentication credentials (https://dev.twitter.com/pages/auth) were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.
{"request":"/1.1/statuses/user_timeline.json","error":"Not authorized."}
401:Authentication credentials (https://dev.twitter.com/pages/auth) were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.
{"request":"/1.1/statuses/user_timeline.json","error":"Not authorized."}
rateLimitStatus=RateLimitStatusJSONImpl{remaining=178, limit=180, resetTimeInSeconds=1432305852, secondsUntilReset=899}, version=3.0.5}
Not sure what to do then. I've already tried to sync my server with ntpdate ntp.ubuntu.com, with no luck.
I think you are using a sandbox (built-in VM) from Cloudera/Hortonworks, etc.
I was also getting the same problem and was trying to sync my clock with the 'time.windows.com' clock, but I failed to do so. So I moved to a 4-node cluster which already existed in my case; there my clock was in sync, and I could run my request to Twitter successfully.
Conclusion: move from the Cloudera/Hortonworks VM to your own installed OS and make sure the clock is in sync.
Hope this helps!

The request failed with HTTP status 417: Expectation failed

Without getting into much detailed code:
I have a 'kiosk' application that is running on about 500-800 different kiosks at about 50 locations. It is a very simple application that connects to the internet via a Verizon MiFi (2-3 MiFis per location). We believe that Verizon has made some changes to the network, and now I randomly get
The request failed with HTTP status 417: Expectation failed
I have viewed The request failed with HTTP status 417: Expectation Failed - Using Web Services
and FB Connect: (417) Expectation failed
But as you can see, I had already used
System.Net.ServicePointManager.Expect100Continue = false
in my code.
So one of the issues I have is that the application isn't easy to test: it will fail for 20-30 minutes or several days, then clear itself up.
Changing the config to include
<system.net>
  <settings>
    <servicePointManager expect100Continue="false" />
  </settings>
</system.net>
would be a large task, and I don't know if that would even fix it. Since it is random, I'm having trouble because I typically can't get it to fail at my desk in the office more than once.
I happen to use VB and .Net for the application and services that run with the 'kiosk'.
The issue seems to be with the config on the MiFi and not the Verizon network itself. We recently switched APNs, and when a MiFi connects to the Verizon network it is supposed to update automatically. Sometimes the MiFi will fail to update the APN setting, and that is when we get this error message. There are two ways I have found to fix this issue. The first and easier one is to log into the MiFi and manually update the setting. If you are dealing with a user who is not tech savvy, and walking them through logging into the MiFi will not work, you can call the Verizon Wireless enterprise help desk and have them remove the feature set from the MiFi, add the features back, and then pull the battery from the MiFi and power-cycle it; this will make the MiFi request the configuration settings again.

RUN#Cloud consistently throws me out during a heavy operation

I'm using a large app instance to run a basic Java web application (GWT + Spring). There's an expensive operation within my application (a report) which takes a long time to execute.
I've tried running it with the CloudBees SDK on my local machine, with similar settings to what it would have in the cloud, and it seems to function just fine. It runs in about 3-4 minutes.
On the cloud, it seems to be taking longer. The problem isn't the fact that it takes long. What happens is that CloudBees terminates the session after 5 minutes and gives me an error in my browser saying 'Unable to connect to server. Please contact your administrator'. A report which doesn't take as long runs just fine. My application has a session timeout of 30 minutes, so that isn't a problem either.
What could possibly be going wrong? Is it something to do with CloudBees?
This may be due to proxy buffering of your request through the routing layer (revproxy), so it most likely isn't a session timeout but the HTTP connection getting cut.
You can set proxyBuffering=false via the bees CLI command (e.g. when you deploy the app), which will ensure longer-running connections can work.
Ideally, however, you could change the app slightly to return a token to the browser which you can poll to get completion status, since even with a connection that lasts that long, going over the internet may provide a bad experience compared to running locally.
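As a rough illustration of that token-and-poll idea (not CloudBees-specific; the class name, endpoint paths, and return values are all hypothetical), a minimal Spring sketch might look like this:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ReportController {

    private final Map<String, CompletableFuture<String>> jobs = new ConcurrentHashMap<>();

    // Start the long-running report and return a token immediately,
    // so the HTTP request finishes long before any proxy timeout.
    @PostMapping("/reports")
    public String startReport() {
        String token = UUID.randomUUID().toString();
        jobs.put(token, CompletableFuture.supplyAsync(this::buildReport));
        return token;
    }

    // The browser polls this endpoint until the report is ready.
    @GetMapping("/reports/{token}")
    public String checkReport(@PathVariable String token) {
        CompletableFuture<String> job = jobs.get(token);
        if (job == null) return "UNKNOWN";
        if (!job.isDone()) return "RUNNING";
        return job.join(); // the finished report, or a URL where it can be fetched
    }

    private String buildReport() {
        // ... the expensive 3-4 minute report generation would go here ...
        return "report-ready";
    }
}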