S3 Error: The difference between the request time and the current time is too large - amazon-s3

I get the error "The difference between the request time and the current time is too large" when calling amazons3.ListObjects:
ListObjectsRequest request = new ListObjectsRequest()
{
    BucketName = BucketName,
    Prefix = fullKey
};
using (ListObjectsResponse response = s3Client.ListObjects(request))
{
    bool result = response.S3Objects.Count > 0;
    return result;
}
What could it be?

The time on your local box is out of sync with the current time; S3 rejects requests whose timestamp is more than 15 minutes off from its own clock. Sync up your system clock and the problem will go away.
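If you're unsure how, a minimal check-and-fix on a systemd-based Linux box looks like this (timedatectl ships with systemd):
date                            # compare against an authoritative clock
timedatectl status              # shows whether NTP synchronization is active
sudo timedatectl set-ntp true   # enable automatic time synchronization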

For those using Vagrant, a vagrant halt followed by vagrant up worked for me.

The clock is out of sync.
I followed the steps in this post to get it working again, but also had to run the following commands:
sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
If at any time you get a message saying the NTP socket is still in use, stop it with sudo /etc/init.d/ntp stop and re-run your command.

I had the same error and I'm using Docker for Mac. Simply restarting Docker worked for me.

On WSL 2 or any Deb-based Linux (Ubuntu, Mint ...):
Check date:
date
Now run:
sudo apt install ntpdate
sudo ntpdate time.nist.gov
Output example:
18 Feb 14:27:36 ntpdate[24008]: step time server 132.163.97.4 offset 1009.140848 sec
Check date again:
date
Alternatively, look for the correctClockSkew option in the AWS CLI/SDK config and set it to true.

For those using Docker in Windows try restarting the Docker Engine in Setting->Reset->Restart Docker.

In case anyone finds this using Laravel and Homestead: simply run
homestead halt
followed by
homestead up
and you're good to go again.

2021 answer:
AWS.config.update({
  accessKeyId: 'xxx',
  secretAccessKey: 'xxxx',
  correctClockSkew: true
});

As others have said, your local clock is out of sync with AWS. You can keep it synced to Amazon's servers directly using NTP, so you won't have to worry about clock drift now or in the future.
Note: The below instructions are for *nix users. I've added a comment with how you might do it in Windows, but as a non-Windows user I can't verify their accuracy.
To install NTP, simply choose one of the following, depending on your distribution:
apt-get install ntp
or
yum install ntp
etc.
Configure NTP to use Amazon servers, like so:
vim /etc/ntp.conf
And in it, comment out the default servers and add these:
server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst
server 2.amazon.pool.ntp.org iburst
server 3.amazon.pool.ntp.org iburst
Restart ntp service:
sudo service ntp restart
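To verify that the daemon is actually syncing against the Amazon pool, you can then run ntpq (installed with the ntp package):
ntpq -p    # lists the peers ntp is using and their measured offsets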
Source: https://allcloud.io/blog/how-to-fix-amazon-s3-requesttimetooskewed/
And a more general article on keeping your time synchronized with NTP:
https://www.digitalocean.com/community/tutorials/how-to-set-up-time-synchronization-on-ubuntu-12-04

This can also be caused by using async/await with the construction of the request object outside the task and the actual call to AWS inside the task. If there are lots of tasks running and the task isn't scheduled in time, or there is some other operation delaying the actual call to AWS, this exception may be thrown. This is more common than you might guess because the default task scheduler does not process tasks in FIFO order, resulting in starvation for some tasks, especially under heavy load.
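To illustrate the pattern being described (a sketch only, not the asker's code; bucketName and prefix are placeholder variables): the first form builds the request long before the task may run, while the second builds it immediately before the call.
// Problematic: the request is built now, but the task may not be
// scheduled until much later under heavy load
var request = new ListObjectsRequest { BucketName = bucketName, Prefix = prefix };
var stale = Task.Run(() => s3Client.ListObjects(request));

// Safer: build the request inside the task, just before the call
var fresh = Task.Run(() =>
{
    var req = new ListObjectsRequest { BucketName = bucketName, Prefix = prefix };
    return s3Client.ListObjects(req);
});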

This reset my system clock correctly on OSX. S3 uploads using the JS SDK work for me now in local dev:
ntpdate us.pool.ntp.org

If this problem occurs on your localhost on Windows 10:
Set "Set time automatically" to ON and "Set time zone automatically" to ON.
This solved my problem.

If you get this error in Windows, follow these steps to solve your problem. Change your local time setting:
Step 1: Click on "Change date and time settings"
Step 2: In the Date and Time window, click on the Internet Time tab
Step 3: Click on "Change settings"
Step 4: From the Server drop-down, select time.nist.gov
Step 5: Click OK
Restart your console and check. It works.

For those facing the same problem on Microsoft WSL 2 Ubuntu, the only workarounds right now are:
sudo hwclock -s
or
wsl --shutdown
The clock offset occurs after waking Windows from sleep. Keep an eye on https://github.com/microsoft/WSL/issues/5324 for a fix from Microsoft.

If you're working with a VM, simply restarting the VM worked for me.

If you are using VirtualBox, the time in the virtual machine is synced with the time of the host machine. Just fixing the time inside the virtual machine will not fix the problem.

I had this error because my local machine's time and timezone were set incorrectly. Changing them to the correct time and timezone worked for me.

I had the same problem on Windows 10 with Docker. Run these commands step by step:
docker run --rm --privileged alpine hwclock -s
again
docker run --rm --privileged alpine hwclock -s
For the last command, which runs MinIO in Docker, don't forget to set your username, password, and timezone:
docker run -p 9000:9000 -e "MINIO_ACCESS_KEY=yourUserName" -e "MINIO_SECRET_KEY=YourPassword" -e "TZ=Europe/Berlin" -v /etc/localtime:/etc/localtime:ro minio/minio server /data

It is a little crude, but this worked for me.
I did a curl to the S3 server:
curl s3.amazonaws.com -v
Then got this:
* Trying 52.216.141.158...
* TCP_NODELAY set
* Connected to s3.amazonaws.com (52.216.141.158) port 80 (#0)
> GET / HTTP/1.1
> Host: s3.amazonaws.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< x-amz-id-2: q2wUOf5ZC7iu2ymbRWUpZaM6GpPLLf/irrntuw/JNB7QYxDzQvcLHQbsbF2dp5zT8rBrGwqnOz0=
< x-amz-request-id: T4H1W4WKBE3F39RM
< Date: Sat, 09 Oct 2021 19:21:24 GMT
< Location: https://aws.amazon.com/s3/
< Server: AmazonS3
< Content-Length: 0
<
* Connection #0 to host s3.amazonaws.com left intact
* Closing connection 0
Got this date:
Sat, 09 Oct 2021 19:21:24 GMT
Set the date in Ubuntu:
sudo date --set "Sat, 09 Oct 2021 19:21:24 GMT"
My code stopped throwing exceptions.
Now I have a script that does this periodically every month.
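A sketch of what such a script might look like, assuming GNU date and that S3 keeps returning a Date header (the header match is case-insensitive, and the trailing carriage return must be stripped):
#!/usr/bin/env bash
# Read the Date header from S3 and set the system clock to it
s3_date=$(curl -sI https://s3.amazonaws.com | grep -i '^date:' | cut -d' ' -f2- | tr -d '\r')
sudo date --set "$s3_date"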

To get rid of this problem, you have to adjust the client's clock so that the timestamp difference to the server is at most 15 minutes. Also set the standard time and zone for your system.

I have the exact same error message but it's not the same cause as any of the others above.
In my case I have a React browser app doing something like this:
import { Storage } from '@aws-amplify/storage'
...
await Promise.all(files.map(file => Storage.put(...)))
I am uploading a lot of files over a slow network connection.
With this code the promises are all started at once, so the request time for all the requests is the same; but because the browser (or Amplify?) throttles the number of concurrent connections, the later requests don't actually hit the server until more than 15 minutes after they were created.
The solution is to limit the concurrency of the promise creation, e.g. use something like bluebird's Promise.map with the concurrency option.
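For illustration, a sketch with bluebird, reusing the files and Storage names from above (the key argument and the concurrency of 4 are placeholders, not from the original code):
const Promise = require('bluebird')

// Start at most 4 uploads at a time, so each request is signed
// shortly before it actually goes out on the wire
await Promise.map(files, file => Storage.put(file.name, file), { concurrency: 4 })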

Using ntp may not work on all versions of your Linux-based server (e.g. an out-of-date Ubuntu server version that is no longer supported will block you from downloading ntp if it is not already installed).
If this is your situation, you can set independent time zones for your Linux VM:
https://community.rackspace.com/products/f/25/t/650
After you do this you may need to reset the time/date. Instructions for doing this are in this article:
http://codeghar.wordpress.com/2007/12/06/manage-time-in-ubuntu-through-command-line

If you are in 2016 and in Istanbul, there is a weird situation: Turkey decided not to switch to winter time. Set your local timezone to Moscow, then restart your machine.

I ran into this issue running Jet (Codeship) and Terraform on MacOS using Docker for Mac Beta channel 1.13.1-beta42.
Failed to read state: Error reloading remote state: RequestTimeTooSkewed: The difference between the request time and the current time is too large.
status code: 403, request id: 9D32BA2A5360FC18
This was resolved by restarting Docker.

I've just started getting this error, and syncing my clock doesn't help. (I've spent 2 hours syncing it to every timeserver I can find, including the AWS servers, but nothing makes a difference.)
Exactly the same thing started happening a year ago on Dec 31 2017. In that case, rebooting my system, and rebuilding my server (that uses the aws java sdk) fixed it. I don't know why. I assumed that AWS had some end-of-year timezone peculiarity. It's also possible that while I was doing these things, AWS timeservers fixed themselves. I have no way to test that hypothesis.
Now, the same thing has suddenly started to happen on Dec 30, 2018. It's not right at year-end, but close enough to seem suspicious. (Never got this error except on these dates.) Rebooting and rebuilding isn't helping this time.
My dev environment on this box is Windows 10 under Parallels. Nothing else on my system has changed - as I've double-checked by rolling back to prior Parallels snapshots. The clocks on both my host MacOS and the virtual Windows 10 are correct.
I'm suspecting an AWS bug.

Rebooting my Windows server fixed it for me.
The time was identical to within ~1 second of the site time.in, so it wasn't off.

I was running into the same issue on my Mac. When I moved to a different timezone (PST to IST), somehow OSX was not picking up the timezone and time change automatically, so I had to set the two manually, and that caused a lag of some 15-20 seconds on my laptop. After setting up automatic sync, the time got synced and the S3 copy command started working.

You can use this tool, chrony, to keep your local system's time in sync with AWS.
To synchronize time:
sudo yum -y install chrony
sudo systemctl enable chronyd
sudo systemctl start chronyd
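On EC2, you can also point chrony at the Amazon Time Sync Service, which instances reach at the link-local address 169.254.169.123. Add a line like this to /etc/chrony.conf (the path may be /etc/chrony/chrony.conf on Debian-based systems) and restart chronyd:
server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4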

This issue generally occurs when the s3cmd client machine's time is not synced with the server.
Check the time on both machines.
Either sync the time between them using the date command:
Client# sudo date --set="string"
Client# sudo date --set="15 MAY 2011 1:40 PM"
or install chrony and restart its service on both machines:
Client# sudo apt-get install chrony
Client# vi /etc/chrony/chrony.conf
pool ntp-server iburst
Client# sudo systemctl restart chronyd
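Either way, you can confirm chrony is tracking a source with chronyc, which installs alongside chrony:
chronyc sources -v    # lists time sources and marks the one currently selected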

Related

Stop High Sierra's httpd/apache and use Homebrews?

I want to use Homebrew's version of Apache on my Mac so I can start/stop it as I please, so I've been trying all afternoon to stop and unload several httpd processes that are run by user _www on macOS 10.13.3 High Sierra, without any luck.
See the attached screenshot: there are 6 httpd processes run by _www and a single process run under root (the Homebrew service).
I've tried
sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist
which gives me an error:
/System/Library/LaunchDaemons/org.apache.httpd.plist: Could not find specified service
I've also tried tracking down the process that starts them, switching user to _www (a no-no) so I can see where they're being started, and so far I'm having no luck.
I want to kill them all, and have them stay dead, as right now they're conflicting with the server I'm actually trying to run. Anyone cleverer than me out there who knows how to kill this literal http demon?
https://stackoverflow.com/a/20439859/996338
Try this:
sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist
This will stop a running instance of Apache, and record that it should not be restarted. It records your preference in /private/var/db/launchd.db/com.apple.launchd/overrides.plist.
For a single session (meaning, between reboots), you can use sudo apachectl stop.
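With the system Apache unloaded, Homebrew's copy can then be managed with brew services (assuming the formula is named httpd, as in current Homebrew):
brew services start httpd    # start now and register to launch at login
brew services stop httpd     # stop and deregister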

Facing frequent issues with Kurento media server one 2 one calling app

I am facing this frequently and am not able to figure out why, as sometimes it appears and sometimes it gets resolved on its own. Can anyone please help?
Basically, I sometimes get an ICE_GATHER_CANDIDATE error in the Kurento media server logs. I am using Ubuntu 16.04 with Kurento 6.6.0 installed, and I have configured the Google STUN server stun4.l.google.com with port 19302.
Try disabling IPv6 on your box. Edit sysctl.conf with
sudo vi /etc/sysctl.conf
add the following at the bottom of the file
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
Save, then run sudo sysctl -p.

How to sync time on host wake-up within VirtualBox?

I am running an Ubuntu 12.04-based box inside of Vagrant using VirtualBox. So far, everything is fine - except for one thing:
Let's assume that the VM is running. Then, the host goes to standby-mode. After waking it up again, the VM is still running, but its internal clock continues where it stopped when the host went down. So this basically means: Put the host to sleep for 15 minutes, wake it up again, then the VM's internal clock is 15 minutes late.
How can I fix this (setting the time manually is not an option for obvious reasons ;-))? Is there a way to run a script inside of a Vagrant VM whenever the host system changes its state?
I've read in the documentation that by default the VirtualBox Guest Additions sync the time with the host every 10 seconds. Apparently this is not happening, but I cannot find any place where it is disabled. So, any ideas?
PS: The Guest Additions are installed and match the version of VirtualBox being used.
The documentation lacks some details here.
What VirtualBox does every 10 seconds is just a slight adjustment (something like 0.005 seconds). Only when the time difference reaches a threshold (20 minutes by default) is a "real" resync done.
You can reduce the threshold (e.g. to 10 seconds) with the following command:
VBoxManage guestproperty set <vm-name> "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold" 10000
Summarizing answers of #zilupe and #Slobodan Kovacevic, solution is to add following to Vagrantfile:
config.vm.provider 'virtualbox' do |vb|
  vb.customize [ "guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 1000 ]
end
This will synchronize the clocks each time the desync becomes > 1 s (1000 ms).
Here is another solution to sync time between guest and host without installing the VirtualBox Guest Additions:
Install ntp on your guest, and uncomment these lines in /etc/ntp.conf:
disable auth
broadcastclient
Then, restart ntp with service ntp restart
Activate broadcast on your host:
For Linux users, edit your /etc/ntp.conf file and configure broadcast (you must adapt IP):
broadcast 192.168.123.255
For Windows users, activate the "Windows Time" service, then configure it to broadcast time.
Then, restart time service on host.
For me to get timesync working I had to do this:
vboxmanage setextradata «machine-name» "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" 0
It turns the timesync on. It was, for some reason, off.
I found a solution:
Install ntpdate.
Add the "s" permission to ntpdate, which allows non-root users to run it as root: sudo chmod u+s /usr/sbin/ntpdate
Add one line to ~/.bashrc: ntpdate -u ntp.ubuntu.com
After that, each time you log in to the Linux system, the time will be synced once.
You can also install the VirtualBox Guest Additions in the VM to have VirtualBox sync the time automatically.

Running EC2 instance suddenly refuses SSH connection

I set up the EC2 instance a couple of days ago, and even last night I was able to SSH to it with no problems. This morning, I can't ssh to it. Port 22 is already open in the security group and I haven't changed anything since last night.
Error:
ssh: connect to host [ip address] port 22: Connection refused
I had a similar issue recently and couldn't figure out why it was happening, so I had to create a new instance, set it up again, and connect and configure all the EBS storage on the new one. That took me a couple of hours... and now it's happening again. On the previous instance I had installed denyhosts, which might have blocked me, but on the current one only apache2 and mysql are running.
The current instance has been up for 16 hours now, so I don't think it's because it didn't finish booting... Also, port 22 is open to all sources (0.0.0.0/0) and is using tcp protocol.
Any ideas?
Thanks.
With the help of @abhi.gupta200297, we were able to resolve it.
The issue was an error in /etc/fstab; sshd was supposed to be started after fstab succeeded, but it didn't, so sshd wouldn't start, and that's why the instance was refusing the connection. The solution was to create a temporary instance, mount the root EBS from the original instance, and comment out the offending lines in the fstab, and voila, it let me connect again. For the future, I simply stopped using fstab: I created a bunch of shell commands to mount the EBS volumes to directories, put them in an /etc/init.d/ebs-init-mount file, and then ran update-rc.d ebs-init-mount defaults to register the script. I'm no longer having issues with locked-out ssh.
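For illustration only, such an init script might look roughly like this (the device name and mount point are hypothetical; register it with update-rc.d ebs-init-mount defaults as described above):
#!/bin/sh
# /etc/init.d/ebs-init-mount: mount EBS volumes without relying on /etc/fstab
case "$1" in
  start)
    mount /dev/xvdf /data || echo "WARNING: could not mount /dev/xvdf" >&2
    ;;
  stop)
    umount /data
    ;;
esac
exit 0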
UPDATE 4/23/2015
Amazon team created a video tutorial of similar issue and show how to debug using this method: https://www.youtube.com/watch?v=_P29ZHu_feU
Looks like sshd might have stopped for some reason. Is the instance EBS-backed? If that's the case, try shutting it down and starting it back up. That should solve the problem.
Also, are you able to ssh from the AWS web console? They have a Java plugin there to ssh into the instance.
For those of you who came across this post because you are unable to SSH into your EC2 instance after a reboot, this is cross-posted to a similar question at serverfault:
From the AWS Developer Forum post on this topic:
Try stopping the broken instance, detaching the EBS volume, and
attaching it as a secondary volume to another instance. Once you've
mounted the broken volume somewhere on the other instance, check the
/etc/sshd_config file (near the bottom). I had a few RHEL instances
where Yum scrogged the sshd_config inserting duplicate lines at the
bottom that caused sshd to fail on startup because of syntax errors.
Once you've fixed it, just unmount the volume, detach, reattach to
your other instance and fire it back up again.
Let's break this down, with links to the AWS documentation:
Stop the broken instance and detach the EBS (root) volume by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped.
Start a new instance in the same region and of the same OS as the broken instance then attach the original EBS root volume as a secondary volume to your new instance. The commands in step 4 below assume you mount the volume to a folder called "data".
Once you've mounted the broken volume somewhere on the other instance,
check the "/etc/sshd_config" file for the duplicate entries by issuing these commands:
cd /etc/ssh
sudo nano sshd_config
ctrl-v a bunch of times to get to the bottom of the file
ctrl-k all the lines at the bottom mentioning "PermitRootLogin without-password" and "UseDNS no"
ctrl-x and Y to save and exit the edited file
@Telegard points out (in his comment) that we've only fixed the symptom. We can fix the cause by commenting out the 3 related lines in the "/etc/rc.local" file. So:
cd /etc
sudo nano rc.local
look for the "PermitRootLogin..." lines and delete them
ctrl-x and Y to save and exit the edited file
Once you've fixed it, just unmount the volume,
detach it by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped,
reattach to your other instance and
fire it back up again.
This happened to me on a Red Hat EC2 instance because these two lines were being automatically appended to the end of the /etc/ssh/sshd_config file every time I launched my instance:
PermitRootLogin without-password
UseDNS no
One of these append operations was done without a line break, so the tail of the sshd_config file looked like:
PermitRootLogin without-passwordUseDNS noPermitRootLogin without-passwordUseDNS no
That caused sshd to fail to start on the next launch. I think this was caused by the bug reported here: https://bugzilla.redhat.com/show_bug.cgi?id=956531 The solution was to remove all the duplicate entries at the bottom of the sshd_config file, and add extra line breaks at the end.
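Before detaching and rebooting, you can sanity-check the edited file with sshd's built-in syntax test (adjust the -f path to wherever the broken volume's sshd_config is mounted on the rescue instance):
sudo sshd -t -f /mnt/data/etc/ssh/sshd_config    # prints nothing when the syntax is valid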
Go to your AWS management console > select instance > right click and select "Get System Logs"
This will list what went wrong.
Had the same issue, but sys logs had this:
Starting sshd: /var/empty/sshd must be owned by root and not group or world-writable.
[FAILED]
Used the same steps described above to detach volume and attach to connectable instance. Then used:
sudo chmod 755 /var/empty/sshd
sudo chown root:root /var/empty/sshd
(https://support.microsoft.com/en-us/help/4092816/ssh-fails-because-var-empty-sshd-is-not-owned-by-root-and-is-not-group)
Then detached and reattached to original EC2 Instance and could now access via ssh.
I got locked out of ssh similarly: I detached an EBS volume but forgot to modify /etc/fstab.
If your Ubuntu has systemd, you can edit /lib/systemd/system/local-fs.target and comment out the last two lines:
#OnFailure=emergency.target
#OnFailureJobMode=replace-irreversibly
I haven't tested this extensively and don't know if there are any risks or side effects involved, but so far it works like a charm. It mounts the root volume and all other volumes (except those that are misconfigured, obviously), then it continues the boot process until SSH is up, so you can connect to the instance and fix the incorrect fstab entries.
In my case, the volume was out of space and a service was failing to start. I used the AWS tutorial (from Sherzod's post) to mount it on a good EC2 instance and clean it up and remove the service from startup before remounting it and verifying that things worked.
For me it was that my IP had changed. Hope this helps someone. Navigate to the security groups and update your My IP in the inbound rules.
I had the same issue: not able to connect to the AWS instance, with a permission denied error.
I got on a screen-share call with the AWS team, and they guided me to change the folder permissions on the instance using the following user data script.
Steps:
Stop the instance
Actions > Instance settings > Edit user data
Enter the below script and save
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
chown root:root /home
chmod 755 /home
chmod 700 /home/ubuntu
chmod 700 /home/ubuntu/.ssh
chmod 600 /home/ubuntu/.ssh/authorized_keys
ls -ld /home /home/ubuntu /home/ubuntu/.ssh /home/ubuntu/.ssh/authorized_keys
chown ubuntu:ubuntu /home/ubuntu -R
--//
Save, and connect to the instance with the correct .pem key.
This resolved my problem.
(Change ubuntu to your instance's username.)

fabric appears to start apache2 but doesn't

I'm using fabric to remotely start a micro AWS server, install git and a git repository, adjust the Apache config, and then restart the server.
If at any point from the fabfile I issue either
sudo('service apache2 restart') or run('sudo service apache2 restart'), or a stop and then a start, the command apparently runs and I get the response indicating Apache has started, for example:
[ec2-184-73-1-113.compute-1.amazonaws.com] sudo: service apache2 start
[ec2-184-73-1-113.compute-1.amazonaws.com] out: * Starting web server apache2
[ec2-184-73-1-113.compute-1.amazonaws.com] out: ...done.
[ec2-184-73-1-113.compute-1.amazonaws.com] out:
However, if I try to connect, the connection is refused, and if I ssh into the server and run
sudo service apache2 status, it says that "Apache is NOT running".
While sshed in, if I run
sudo service apache2 start, the server is started and I can connect. Has anyone else experienced this? Or does anyone have any tips as to where I could look, in log files etc., to work out what has happened? There is nothing in apache2/error.log, syslog, or auth.log.
It's not that big a deal, I can work around it. I just don't like such silent failures.
Which version of fabric are you running?
Have you tried to change the pty argument (try to change shell too, but it should not influence things)?
http://docs.fabfile.org/en/1.0.1/api/core/operations.html#fabric.operations.run
You can set the pty argument like this:
sudo('service apache2 restart', pty=False)
Try this:
sudo('service apache2 restart', pty=False)
This worked for me after running into the same problem. I'm not sure why this happens.
This is an instance of this issue, and there is an entry in the FAQ that has the pty answer. Unfortunately, CentOS 6 doesn't support pty-less sudo commands, and I didn't like the nohup solution since it killed output.
The final entry in the issue mentions using sudo('set -m; service servicename start'). This turns on Job Control, and therefore background processes are put in their own process group. As a result, they are not terminated when the command ends.
When connecting to your remotes on behalf of a user granted enough privileges (such as root), you can manage system services as shown below:
from fabtools import service
service.restart('apache2')
https://fabtools.readthedocs.org/en/0.13.0/api/service.html
P.S. It requires the installation of fabtools:
pip install fabtools
A couple more ways to fix the problem:
You could run the fab target with the --no-pty option:
fab --no-pty <task>
Inside the fabfile, set the global environment variable always_use_pty to False before your target code executes:
env.always_use_pty = False
Using pty=False still didn't solve it for me. The solution that ended up working for me is doing a double-nohup, like so:
run.sh
#! /usr/bin/env bash
nohup java -jar myapp.jar 2>&1 &
fabfile.py
...
sudo("nohup ./run.sh &> nohup.out", user=env.user, warn_only=True)
...