I have spent a few hours hunting for the problem here.
I have been working with Macs for years now and never had this problem, and have SSHed into EC2 instances thousands of times.
I recently received a new MacBook Pro at work.
The ssh command itself is present and runs, so there is no "command not found" error.
But no matter what server or EC2 instance I try to SSH into, as I have done a million times before, I get a timeout.
Before you ask: I have searched all over for this problem. I have also looked for the usual ~/.ssh directory, which seems to be missing, so I cannot find any config file.
The following is the Mac info:
Catalina 10.15.2
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.3 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 16 GB
Boot ROM Version: 1037.60.58.0.0 (iBridge: 17.16.12551.0.0,0)
Serial Number (system): C02ZNMV5MD6N
Hardware UUID: 27B1EDF5-B1D2-5F86-BD12-D646F36D9D2D
Activation Lock Status: Enabled
ETA: Yes, I can access the EC2 network from a Windows machine. Yes, I have the correct PEM file. And yes, I have made sure the security groups in AWS are correct. For some reason the usual ssh -i command, picked up directly from the AWS Connect page for the EC2 instance, always times out.
Crazy question: does the ssh in Catalina require some additional command or parameter besides -i?
(I do not seem to be able to ping, telnet, etc. either, so something seems to be preventing the OS from getting out on SSH port 22.)
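A quick way to narrow this down, independent of ssh itself, is to test raw TCP reachability to port 22. This is only a diagnostic sketch (bash-only, since /dev/tcp is a bash feature); the host name in the usage line is a placeholder:

```shell
# Hypothetical diagnostic: test whether a raw TCP connection succeeds,
# independent of any ssh configuration. Bash-only (/dev/tcp is a bash feature).
port_open() {
  # $1 = host, $2 = port, $3 = timeout in seconds
  timeout "$3" bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Usage (my-ec2-host is a placeholder):
#   port_open my-ec2-host 22 5 && echo "reachable" || echo "blocked or filtered"
```

If this also fails against EC2 hosts that work from the Windows machine, the block is below ssh (a firewall, VPN client, or security agent on the new Mac), and no ssh flag will help.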
Does anyone know of, or has anyone had, this problem and a fix for it? I am fairly sure it is some type of configuration issue in ssh or in the network settings.
It is driving me crazy. Any and all help would be greatly appreciated!
New MacBook Pro owner/user here. Same issue, despite all configs being identical to my Windows 10 PC and Ubuntu 20 laptop.
For some reason, this doesn't work for me on my Macbook, but does work on Windows and Ubuntu.
ssh -i path-to-keyfile.pem user@ipaddress
But creating an SSH config file and adding my AWS keyfile to my keychain works:
open ~/.ssh/config if the config file exists, or touch ~/.ssh/config if not.
Edit this config file as follows:
Host *
AddKeysToAgent yes
UseKeychain yes
IdentityFile ~/.ssh/id_rsa
Note: I don't know for sure, but I imagine only the AddKeysToAgent and UseKeychain lines are what's important here; I'm using the IdentityFile line for connecting to my git repos.
Save the config file and exit. Next, make sure your keyfile's permissions aren't too open, otherwise you won't be able to add it to your keychain:
chmod 600 path-to-keyfile.pem
Finally, add the keyfile to your keychain:
ssh-add -K path-to-keyfile.pem
Now on Mac, I'm able to ssh into my AWS instance without the -i flag:
ssh aws-username@aws-ipaddress
Hope this helps. I found the solution here: https://www.cloudsavvyit.com/1795/how-to-add-your-ec2-pem-file-to-your-ssh-keychain/
PS - I'm also unable to SFTP into AWS using FileZilla on Mac, so I'm looking into this as well.
Update on FileZilla: a bit bizarre, and I haven't figured out how to save my settings, but for now this answer works: https://superuser.com/questions/280808/filezilla-on-mac-sftp-with-passwordless-authentication
I want to use Homebrew's version of Apache on my Mac so I can start/stop it as I please, so I've been trying all afternoon to stop and unload several httpd processes that are run by user _www on macOS 10.13.3 High Sierra without any luck.
See the attached screenshot: there are six httpd processes run by _www and a single process run under root (the Homebrew service).
I've tried
sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist
which gives me an error:
/System/Library/LaunchDaemons/org.apache.httpd.plist: Could not find specified service
I've also tried tracking down the process that starts them, switching user to _www (a no-no) to see where they're being launched, and so far I'm having no luck.
I want to kill them all and have them stay dead, as right now they're conflicting with the server I'm actually trying to run. Anyone cleverer than me out there who knows how to kill this literal HTTP demon?
https://stackoverflow.com/a/20439859/996338
Try this:
sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist
This will stop a running instance of Apache, and record that it should not be restarted. It records your preference in /private/var/db/launchd.db/com.apple.launchd/overrides.plist.
For a single session (that is, until the next reboot), you can use sudo apachectl stop.
I have a relatively simple requirement: I want the clock on the CentOS guests that I create under KVM to be synchronized with their CentOS host from the very first boot of the VMs.
It's easy enough to synchronize them with NTP after they are up and running. However, if the host's clock and the VM's clock are widely different when NTP starts, it can cause a large jump in the VM's time. Many of our applications running under the VMs do not handle this time jump well, so we want to prevent this from happening.
So my question is: how can I configure my VMs to start with the same time as their host? In the test I just ran, my host's time was 14:00 PDT. A VM I created under that host came up with an initial time of 21:00 PDT. NTP adjusted it to 14:00 PDT shortly after it started, matching the host's time, and subsequent reboots of the VM always had the correct time. The problem only occurs on the first boot. I want the VM to come up at 14:00 PDT on the very first boot to avoid the NTP time jump.
Okay, I've answered my own question. The combination of settings that I used to give me the results I need are:
Set the hwclock on the host to use UTC time. This is done with the --utc option of the hwclock command. I run the following command on my host OS:
hwclock --utc --set --date="time-string"
Tell CentOS that the hwclock is using UTC via the file /etc/adjtime. For example, you could initialize this file using
echo -e "0.0 0 0.0\n0\n\nUTC" >/etc/adjtime
Create this file on both the host and your guest VMs. I create the file on my guests before I boot them for the first time by directly accessing the guest file system from the host.
Set the time zone you want for your system time. Again, do this on both your host and your guests:
ln -sf /usr/share/zoneinfo/time-zone /etc/localtime
echo "ZONE=time-zone" >/etc/sysconfig/clock
export TZ=time-zone
where time-zone is a standard CentOS time zone string, for example "US/Pacific".
Set the system time on your host based on the hwclock. The --utc option is needed to tell CentOS that the hwclock is in UTC time. It will take the UTC time and set your system time based on the TZ environment variable:
hwclock --utc --hctosys
The steps above are all done once, when you are configuring your host and guests. To keep time synced on all your servers after they are up and running you'll want to configure NTP on your host and guests.
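The one-time pieces of steps 2 and 3 above can be sketched as a small script. This is only an illustration of the commands already shown: "US/Pacific" is an example value, and writing under /etc requires root.

```shell
# Sketch of steps 2 and 3 above, assuming the example time zone "US/Pacific".
# Target paths are the standard CentOS locations; run as root to apply.
TZNAME="US/Pacific"

# Step 2: generate the /etc/adjtime content that marks the hardware clock as UTC.
make_adjtime() {
  printf '0.0 0 0.0\n0\n\nUTC\n'
}

# Step 3: the three time-zone settings from the answer above.
set_timezone() {
  ln -sf "/usr/share/zoneinfo/$TZNAME" /etc/localtime
  echo "ZONE=$TZNAME" > /etc/sysconfig/clock
  export TZ="$TZNAME"
}

# Usage (on the host, and in the guest image before its first boot):
#   make_adjtime > /etc/adjtime
#   set_timezone
```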
I do not understand why I can no longer "vagrant up" after running some provisioning scripts (I use Ansible).
[default] Waiting for machine to boot. This may take a few minutes...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.ssh.timeout" value) time period. This can
mean a number of things.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.ssh.timeout") value.
What can be broken? What is vagrant trying to do when running vagrant up?
Connecting with the default user, i.e. "vagrant" ?
Obviously it is related to SSH.
Yes, I pushed some SSH keys, but I do not think I changed the vagrant user at all, so "vagrant up" should still work, right? I also made some small changes to the /etc/sudoers file, but I tried without modifying it and it does not seem to work either...
Well, I am running out of ideas...
Thanks.
My 2 cents:
Set vbox.gui to true in your Vagrantfile; this will help you see whether the box boots correctly. It could get stuck during the boot process; vagrant expects the boot to complete, times out, and you get the error you saw.
After seeing the error message, run vagrant ssh and see what you get.
NOTE: you may need to enable debug to see more info: VAGRANT_LOG=debug vagrant up
BTW: make sure your Vagrant (1.3.5) and VirtualBox (4.3.2) are up to date.
Actually, this might not be an SSH issue. It sounds like your VM is hanging when you vagrant up, which may be the result of networking issues that can be cleared by restarting networking from within the VM. Try the steps below to fix it.
First, edit your Vagrantfile and add vb.gui = true to bring up your VM in a GUI mode. For example, my test Vagrantfile looks like:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "testbox"
  config.vm.network :private_network, ip: "192.168.50.102"
  config.vm.provider :virtualbox do |vb|
    vb.gui = true
  end
end
Second, issue a vagrant up and your VM's GUI will appear. Vagrant will still hang, but you should be able to log into the VM via the GUI.
Third, open a terminal window and issue the following command:
sudo /etc/init.d/network restart
This should resolve your issue. You could issue a vagrant reload to refresh the VM.
Here's a reference to the vagrant hanging issue: https://github.com/mitchellh/vagrant/wiki/%60vagrant-up%60-hangs-at-%22Waiting-for-VM-to-boot.-This-can-take-a-few-minutes%22
(Note, my VM was a CentOS / VirtualBox instance.)
Starting a 64-bit Vagrant box on a host that does not support virtualization leads to this error; you will see it if you start the VM in GUI mode.
I am using bento/ubuntu-16.04 and the following steps solved my problem.
You need to change the following:
i. Go to this directory: cd ~/.vagrant.d/boxes/bento-VAGRANTSLASH-ubuntu-16.04/2.3.1/virtualbox
ii. Open the box.ovf file using nano box.ovf
iii. Change <Adapter slot="0" enabled="true" MACAddress="080027C30A85" type="82540EM">
to
<Adapter slot="0" enabled="true" MACAddress="080027C30A85" cable="true" type="82540EM">
You might need to reboot your machine. To reboot, follow these steps:
i. vagrant halt
ii. vagrant up
iii. vagrant ssh
I get the error "The difference between the request time and the current time is too large" when I call the amazonS3.ListObjects method:
ListObjectsRequest request = new ListObjectsRequest()
{
    BucketName = BucketName,
    Prefix = fullKey
};

using (ListObjectsResponse response = s3Client.ListObjects(request))
{
    bool result = response.S3Objects.Count > 0;
    return result;
}
What could it be?
The time on your local box is out of sync with the current time. Sync up your system clock and the problem will go away.
For those using Vagrant, a vagrant halt followed by vagrant up worked for me.
The clock is out of sync.
I followed the steps in this post to get it working again, but also had to run the following commands:
sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
If at any time you get a message saying the NTP socket is still in use, stop it with sudo /etc/init.d/ntp stop and re-run your command.
I had the same error and I'm using Docker for Mac. Simply restarting Docker worked for me.
On WSL 2 or any Debian-based Linux (Ubuntu, Mint, ...):
Check date:
date
Now run:
sudo apt install ntpdate
sudo ntpdate time.nist.gov
Output example:
18 Feb 14:27:36 ntpdate[24008]: step time server 132.163.97.4 offset 1009.140848 sec
Check date again:
date
Alternatively, look for the correctClockSkew option in the AWS CLI/SDK config and set it to true.
For those using Docker on Windows, try restarting the Docker Engine via Settings -> Reset -> Restart Docker.
In case anyone finds this while using Laravel and Homestead, simply run
homestead halt
followed by
homestead up
and you're good to go again.
2021 answer:
AWS.config.update({
  accessKeyId: 'xxx',
  secretAccessKey: 'xxxx',
  correctClockSkew: true
});
As others have said, your local clock is out of sync with AWS. You can keep it synced to Amazon's servers directly using NTP, so you won't have to worry about clock drift now or in the future.
Note: The below instructions are for *nix users. I've added a comment with how you might do it in Windows, but as a non-Windows user I can't verify their accuracy.
To install NTP, simply choose one of the following, depending on your distribution:
apt-get install ntp
or
yum install ntp
etc.
Configure NTP to use Amazon servers, like so:
vim /etc/ntp.conf
And in it, comment out the default servers and add these:
server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst
server 2.amazon.pool.ntp.org iburst
server 3.amazon.pool.ntp.org iburst
Restart ntp service:
sudo service ntp restart
Source: https://allcloud.io/blog/how-to-fix-amazon-s3-requesttimetooskewed/
And a more general article on keeping your time synchronized with NTP:
https://www.digitalocean.com/community/tutorials/how-to-set-up-time-synchronization-on-ubuntu-12-04
This can also be caused by using async/await where the request object is constructed outside the task but the actual call to AWS happens inside it. If many tasks are running and the task isn't scheduled in time, or some other operation delays the actual call to AWS, this exception may be thrown. This is more common than you might guess, because the default task scheduler does not process tasks in FIFO order, which can starve some tasks, especially under heavy load.
This reset my system clock correctly on OS X; S3 uploads using the JS SDK now work for me in local dev:
ntpdate us.pool.ntp.org
Read more about this here
If you have this problem on your local Windows 10 machine:
turn "Set time automatically" ON and "Set time zone automatically" ON.
This solved my problem.
If you get this error in Windows, follow these steps to solve the problem by changing your local time settings:
step 1: click on "Change date and time settings"
step 2: in the pop-up Date and Time window, click on the Internet Time tab
step 3: click on Change Settings
step 4: from the Server drop-down, select time.nist.gov, or check this website
step 5: click on OK
Restart your console and check. It works.
For those facing the same problem on Microsoft WSL 2 Ubuntu, the only workarounds right now are:
sudo hwclock -s
Or
wsl --shutdown
The clock offset occurs after waking Windows from sleep. Keep an eye on https://github.com/microsoft/WSL/issues/5324 for a fix from Microsoft.
If you're working with a VM, restarting the VM just worked for mine.
If you are using VirtualBox, the time in the virtual machine is synced with the time of the real machine, so fixing the time inside the virtual machine alone will not fix the problem.
I had this error because my local machine's time and timezone were set incorrectly. Changing them to the correct time and timezone worked for me.
I had the same problem on Windows 10 with Docker. Run these commands step by step:
docker run --rm --privileged alpine hwclock -s
again
docker run --rm --privileged alpine hwclock -s
And for the last command (don't forget to set your username, password, and timezone), run MinIO under Docker with:
docker run -p 9000:9000 -e "MINIO_ACCESS_KEY=yourUserName" -e "MINIO_SECRET_KEY=YourPassword" -e "TZ=Europe/Berlin" -v /etc/localtime:/etc/localtime:ro minio/minio server /data
It is a little crude, but this worked for me.
I did a curl to the S3 server:
curl s3.amazonaws.com -v
Then got this
* Trying 52.216.141.158...
* TCP_NODELAY set
* Connected to s3.amazonaws.com (52.216.141.158) port 80 (#0)
> GET / HTTP/1.1
> Host: s3.amazonaws.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 307 Temporary Redirect
< x-amz-id-2: q2wUOf5ZC7iu2ymbRWUpZaM6GpPLLf/irrntuw/JNB7QYxDzQvcLHQbsbF2dp5zT8rBrGwqnOz0=
< x-amz-request-id: T4H1W4WKBE3F39RM
< Date: Sat, 09 Oct 2021 19:21:24 GMT
< Location: https://aws.amazon.com/s3/
< Server: AmazonS3
< Content-Length: 0
<
* Connection #0 to host s3.amazonaws.com left intact
* Closing connection 0
I got this date:
Sat, 09 Oct 2021 19:21:24 GMT
Set the date in ubuntu
sudo date --set "Sat, 09 Oct 2021 19:21:24 GMT"
My code stopped throwing exceptions.
Now I have a script that does this periodically, once a month.
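A sketch of how such a periodic check might look. This is a hypothetical script, not a hardened tool: it assumes GNU date and curl, and the crude `date --set` fix still requires root.

```shell
# Sketch: measure clock skew against S3's Date response header (assumes GNU date).

# extract_date: pull the Date header out of a raw HTTP header block.
extract_date() {
  printf '%s\n' "$1" | tr -d '\r' | awk -F': ' 'tolower($1) == "date" { print $2 }'
}

# skew_seconds: absolute difference in seconds between two date strings.
skew_seconds() {
  a=$(date -d "$1" +%s)
  b=$(date -d "$2" +%s)
  echo $(( a > b ? a - b : b - a ))
}

# Usage (network required; setting the clock needs root):
#   headers=$(curl -sI https://s3.amazonaws.com)
#   remote=$(extract_date "$headers")
#   echo "skew: $(skew_seconds "$remote" "$(date -u)") seconds"
#   # sudo date --set "$remote"   # the crude fix from the answer above
```

A proper fix is still NTP/chrony, as other answers note; this only papers over the drift until the next run.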
To get rid of this problem, adjust the client's clock so that its timestamp differs from the server's by at most 15 minutes. Also set the standard time and time zone for your system.
Check the full details here.
I have the exact same error message but it's not the same cause as any of the others above.
In my case I have a React browser app doing something like this:
import { Storage } from '@aws-amplify/storage'
...
await Promise.all(files.map(file => Storage.put(...)))
I am uploading a lot of files over a slow network connection.
With this code, the promises are all started at once, so the request time for all the requests is the same, but because the browser (or Amplify?) throttles the number of concurrent connections, the later requests don't actually hit the server until more than 15 minutes after they were created.
The solution is to limit the concurrency of the promise creation, e.g. using something like bluebird's Promise.map with the concurrency option.
Using ntp may not work on all versions of your Linux-based server (e.g. an out-of-date Ubuntu server version that is no longer supported will block you from downloading ntp if it is not already installed).
If this is your situation, you can set independent time zones for your Linux VM:
https://community.rackspace.com/products/f/25/t/650
After you do this you may need to reset the time/date. Instructions for doing this are in this article:
http://codeghar.wordpress.com/2007/12/06/manage-time-in-ubuntu-through-command-line
If you are in 2016 and in Istanbul, here is a weird situation: Turkey decided not to switch to winter time this year. Set your local timezone to Moscow, then restart your machine.
I ran into this issue running Jet (Codeship) and Terraform on MacOS using Docker for Mac Beta channel 1.13.1-beta42.
Failed to read state: Error reloading remote state: RequestTimeTooSkewed: The difference between the request time and the current time is too large.
status code: 403, request id: 9D32BA2A5360FC18
This was resolved by restarting Docker.
I've just started getting this error, and syncing my clock doesn't help. (I've spent 2 hours syncing it to every timeserver I can find, including the AWS servers, but nothing makes a difference.)
Exactly the same thing started happening a year ago on Dec 31 2017. In that case, rebooting my system, and rebuilding my server (that uses the aws java sdk) fixed it. I don't know why. I assumed that AWS had some end-of-year timezone peculiarity. It's also possible that while I was doing these things, AWS timeservers fixed themselves. I have no way to test that hypothesis.
Now, the same thing has suddenly started to happen on Dec 30, 2018. It's not right at year-end, but close enough to seem suspicious. (Never got this error except on these dates.) Rebooting and rebuilding isn't helping this time.
My dev environment on this box is Windows 10 under Parallels. Nothing else on my system has changed - as I've double-checked by rolling back to prior Parallels snapshots. The clocks on both my host MacOS and the virtual Windows 10 are correct.
I'm suspecting an AWS bug.
Rebooting my Windows server fixed it for me, even though its time matched the site time.in to within about a second, so it wasn't off.
I was running into the same issue on my Mac. When I moved to a different timezone (PST to IST), somehow OS X did not pick up the timezone and time change automatically, so I had to set both manually, which left a lag of some 15-20 seconds on my laptop. After enabling automatic sync, the time got synced and the S3 copy command started working. For reference:
You can use this tool to keep your local system's time in sync with AWS.
To synchronize time:
sudo yum -y install chrony
sudo systemctl enable chronyd
sudo systemctl start chronyd
This issue generally occurs when the s3cmd client machine's time is not synced with the server.
Check the time on both machines.
Either sync the time between them using the date command:
Client# sudo date --set="string"
Client# sudo date --set="15 MAY 2011 1:40 PM"
or install chrony and restart its service on both machines:
Client# sudo apt-get install chrony
Client# vi /etc/chrony/chrony.conf
pool ntp-server iburst
Client# sudo systemctl restart chrony