I'm using IBM Bluemix and Docker.
[My goal] I want to create a container. I found from the website that we can use SSH to log in as the "root" user, so I guess I could also install Maven and MySQL on this container. Though an IBM Container is a Docker-based file system, we can treat the container just like a Linux virtual machine (please correct me if I'm wrong).
I found a similar question here, where njleviere said that port 22 is closed. How do I determine if a port is open or closed? If it's closed, how do I open it? Also, I think that port 22 is actually open in my case.
[Problem Description] I mainly followed this website, but I'm using Ubuntu and SSH instead of PuTTY.
First, I created the key file with ssh-keygen. For the filename, I tried "cloud" and "cloud.key". Both failed, so I think the name of the key file does not matter (please correct me if I'm wrong).
I opened the .pub key. There is a "yu@yu-VirtualBox" tag at the end of the key file. I am not sure whether I should include this tag, so I tried several things:
ssh-rsa KeyString yu@yu-VirtualBox
ssh-rsa KeyString
KeyString
All failed.
Then I created the container. I chose the "ibmliberty" image. I selected the public IP I had created before (already unbound from any container), added 22 to the public ports, and pasted "cloud.pub" into the SSH key field. After several minutes, the container started to run. The following two links are screenshots of the Bluemix console while creating the container.
Then I could see the default page for port 9080 in the browser at https://169.44.124.121:9080. It said "Welcome to Liberty" and "WebSphere Application Server V8.5.5.9".
Then I typed (cloud and cloud.pub are the key files):
ssh -i cloud root@169.44.124.121
Then I get:
ssh: connect to host 169.44.124.121 port 22: Connection refused
I used cf ic ps to check the port. It looks fine.
I see 169.44.124.121:22->22/tcp under the PORTS.
Also, I see many programmers use a Dockerfile to launch IBM Containers. Should I switch to a Dockerfile instead of the IBM console web interface?
The default ibm-liberty image on bluemix doesn't include sshd. You could add it - you'll need to add supervisord, sshd, and the appropriate configuration for both into your Dockerfile.
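As a rough sketch only, a Dockerfile along these lines could add sshd on top of the Liberty image; the registry path, package names and supervisord config file here are assumptions, not an official recipe:
# Sketch only: base image tag, packages and config paths are assumptions
FROM registry.ng.bluemix.net/ibmliberty:latest
RUN apt-get update && apt-get install -y openssh-server supervisor && mkdir -p /var/run/sshd
# authorize your public key for root logins
COPY cloud.pub /root/.ssh/authorized_keys
# supervisord.conf must start both sshd and the Liberty server
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 9080
CMD ["/usr/bin/supervisord", "-n"]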
Alternatively, if what you really want is just a secure command-line connection into your container, you can use cf ic exec or docker exec (e.g. cf ic exec -ti mycontainername bash). That'll give you a command line without the overhead (and security exposure) of a running sshd.
I am creating EC2 instances and configuring them using Ansible scripts. I have used
[ssh_connection]
pipelining=true
in my ansible.cfg file, but it still asks to verify the SSH fingerprint. When I type yes and press enter, it fails to log in to the instance.
Just to let you know, I am using Ansible dynamic inventory and hence am not storing IPs or DNS names in a hosts file.
Any help will be much appreciated.
TIA
Pipelining doesn't have any effect on authentication - it bundles up individual module calls into one bigger file to transfer over once a connection has been established.
In order not to stop execution and prompt you to accept the SSH key, you need to disable strict host key checking, not enable pipelining.
You can set that by exporting ANSIBLE_HOST_KEY_CHECKING=False, or by setting it in ansible.cfg with:
[defaults]
host_key_checking=False
The latter is probably better for your use case, because it's persistent.
Note that even though this is a setting that deals with ssh connections, it is in the [defaults] section, not the [ssh_connection] one.
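For a one-off run, the environment variable form looks like this (the playbook name and dynamic inventory script below are placeholders for your own):
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ec2.py site.yml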
==
The fact that when you type yes you fail to log in makes it seem like this might not be your only problem, but you haven't given enough information to solve the rest.
If you're still having connection issues after disabling host key checking, edit the question to add the output of your SSHing into the instance manually, alongside the output of an Ansible play with -vvv for verbose output.
First steps to look through when troubleshooting:
What are the differences between when I connect and when Ansible does?
Is the ansible_ssh_user set to the right user for the ec2 instance?
Is the ansible_ssh_private_key_file the same as the private part of the keypair you assigned the instance on creation?
Is ansible_ssh_host set correctly by whatever is generating your dynamic inventory?
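For reference, global defaults for the user and private key can also be set persistently in ansible.cfg (the user name and key path below are placeholders for your own values):
[defaults]
host_key_checking=False
remote_user=ec2-user
private_key_file=~/.ssh/my-ec2-keypair.pem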
I think you can find the answer here: ansible ssh prompt known_hosts issue
Basically, when you run ansible-playbook, you will need to set the environment variable:
ANSIBLE_HOST_KEY_CHECKING=False
Make sure you have your private key added (ssh-add your_private_key).
I have two Amazon EC2 instances.
I can connect to those EC2 instances from my Windows machine using PuTTY (with the public key generated from the private key provided by Amazon).
Now I want to install Tungsten Replicator on my EC2 instances,
and Tungsten Replicator needs SSH access from one EC2 instance to the other.
I tried to check whether SSH works from one EC2 instance to the other.
I tried:
ssh ec2-user@<public IP of destination instance>
// also tried
ssh ec2-user@<private IP of destination instance>
but it's not working.
I got the following error:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
I have searched on Google and tried some tricks, but none of them worked.
Sometimes I got the following error:
Address public_ip maps to xxxx.eu-west-1.compute.amazonaws.com, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
Can anyone please tell me how to connect via SSH from one EC2 instance to another?
I'd suggest you create a dedicated keypair for the tungsten user:
cd tungsten-user-home/.ssh
ssh-keygen -t rsa -f id_rsa
cat id_rsa.pub >> authorized_keys
Then copy both files to the other host, into the same place and with the same permissions.
This will allow tungsten to work without requiring your own key.
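On the second host, after copying the files over, the permissions would typically be tightened like this (the user name and paths are assumptions):
chown -R tungsten:tungsten ~tungsten/.ssh
chmod 700 ~tungsten/.ssh
chmod 600 ~tungsten/.ssh/id_rsa ~tungsten/.ssh/authorized_keys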
Just like when you SSH from your local machine to an EC2 instance, you need to give the ssh command the proper pem file:
ssh -i my_pem_file.pem ec2-user@private-or-public-ip-or-dns
Just in case anyone is pondering this question, here are my two cents.
Connecting to one EC2 instance from another will work as suggested by "Uri Agassi". Considering best practices and security, it is a good idea to create and assign an IAM role to the source EC2 instance.
One way to allow one EC2 instance to connect to another is to set an ingress rule on the target EC2 instance that lets it accept traffic from the source EC2 instance's security group. Here's a Python function that uses Boto3 to do this:
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
ec2 = boto3.resource('ec2')

def allow_security_group_ingress(target_security_group_id, source_security_group_name):
    # Add an ingress rule to the target group that accepts traffic
    # from any instance in the source security group.
    try:
        ec2.SecurityGroup(target_security_group_id).authorize_ingress(
            SourceSecurityGroupName=source_security_group_name)
        logger.info("Added rule to group %s to allow traffic from instances in "
                    "group %s.", target_security_group_id, source_security_group_name)
    except ClientError:
        logger.exception("Couldn't add rule to group %s to allow traffic from "
                         "instances in %s.",
                         target_security_group_id, source_security_group_name)
        raise
After you've set this, put the private key of the key pair on the source instance and use it when you SSH from the source instance:
ssh -i {key_file_name} ec2-user@{private_ip_address_of_target_instance}
There's a full Python example that shows how to do this on GitHub /awsdocs/aws-doc-sdk-examples.
See, whether you have deployed both machines with the same key pair or different ones, it's not a problem. Just go to your source EC2 machine and, in the .ssh folder, create a key file with the same name as the key that was used to create the second machine. Then run chmod 400 <keypair name> and try ssh -i <keyname> <user-name>@<IP>.
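Spelled out, and assuming a hypothetical key name, user and address, that boils down to:
vi ~/.ssh/target-key.pem        # paste in the private key used to launch the target instance
chmod 400 ~/.ssh/target-key.pem
ssh -i ~/.ssh/target-key.pem ec2-user@10.0.0.12   # example user and private IP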
This shouldn't be this hard. I cannot connect to new AWS EC2 instance via SSH clients. I am connecting from a Win 7 box.
Instance OS: Debian 6
AMI: debian-squeeze-i386-20121119-e4554303-3a9d-412e-9604-eae67dde7b76-ami-1977f070.1(ami-a121a6c8)
User: tried root and also ec2-user
Using .pem keypair that AWS generated and I downloaded
Confirmed security group and Key Pair Name on instance
SSH port 22 is OPEN: Nmap says so and Telnet gets a welcome reply
Using 3 different clients: all clients connect ok
PuTTY replies: Server refused our key
MindTerm Java browser add-in replies: Authentication failed, permission denied
Bitvise SSH replies: Attempting 'publickey' auth; auth failed;
Rebooted instance, wash, rinse, repeat...
REBUILT new instance and new keypair, wash, rinse, repeat...
Connecting isn't the issue. Why would the instance not accept the .pem file as the password? Is there an additional step I am missing? I followed EVERY frigging guide I could Google. AWS support is a joke. stackoverflow to the rescue...
TIA.
According to the Debian wiki, which has documentation on the AMI you are using, the username you need to use to log in is 'admin'.
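For example (the key file and hostname below are placeholders for your own):
ssh -i your-key.pem admin@ec2-203-0-113-25.eu-west-1.compute.amazonaws.com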
I have had many issues with connecting to EC2 via ssh.
ssh -i the-keypair-filename root@yourdomain.com
- Keypair file must be in same directory.
- I just used terminal to connect.
Make sure you generate or assign the keypair when launching the instance.
Also you can verify the keypair you have set in the AWS Management Console, this is done by selecting the running instance and then looking for "Key Pair Name:".
I hope this is helpful.
My problem was that I didn't add a volume that was expected in the fstab file, so the server didn't start fully and the sshd daemon wasn't running.
Check with:
telnet HOST 22
Check the server logs to make sure it starts properly before you waste lots of time like I did.
Amazon Linux AMIs that use the ec2-user login are listed at the bottom of this page.
http://aws.amazon.com/amazon-linux-ami/
Check that you are using one of those if trying to use ec2-user, or check the documentation for the AMI you are using.
Teri
Try using the "admin" username and ignore the username suggested by Amazon.
I had a similar problem and solved the issue with the following approach.
1) Edited the knife.rb file in my chef folder, i.e. :\Users\Administrator\chef-starter\chef-repo.chef\knife.rb, as below:
knife[:aws_access_key_id] = "xxxxxxxxxxxxxxxxxxxx"
knife[:aws_secret_access_key] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
knife[:region] = 'ap-southeast-1'
knife[:aws_ssh_key_id] = "ChefUser"
knife[:ssh_user]="ec2-user"
In the command prompt, I issued the command to create an EC2 server:
knife ec2 server create -r "role[webserver]" --image ami-abcd1234 --flavor t1.micro -G ChefClient -x root -N server01 -i H:\Chef-files\ChefUser.pem
Note that, even though I had given all the details in the knife.rb file, I had to give the .pem file path on the command line through the -i option. That solved my problem.
Check if this solution helps you.
Cheers,
Chandan
Logging in as "ubuntu" worked for me:
ssh -i private_key.pem ubuntu@myubuntuserver
Hope this helps
--Erin
I am trying to connect to remote server via ssh but getting connection timeout.
I ran the following command
ssh testkamer@test.dommainname.com
and got the following result:
ssh: connect to host testkamer@test.dommainname.com port 22: Connection timed out
but if I try to connect to another remote server, I can log in successfully.
So I think there is no problem with SSH itself, and when another person tries to log in to this server with the same login and password, he can log in successfully.
Please help me
Thanks.
Here are a couple of things that could be preventing you from connecting to your Linode instance:
DNS problem: if the computer that you're using to connect to your remote server isn't resolving test.kameronderdehamer.nl properly, then you won't be able to reach your host. Try to connect using the public IP address assigned to your Linode and see if it works (e.g. ssh user@123.123.123.123). If you can connect using the public IP but not using the hostname, that would confirm that you're having some problem with domain name resolution.
Network issues: there might be some network issues preventing you from establishing a connection to your server. For example, there may be a misconfigured router in the path between you and your host, or you may be experiencing packet loss. While this is not frequent, it has happened to me several times with Linode and can be very annoying. It could be a good idea to check this just in case. You can have a look at Diagnosing network issues with MTR (from the Linode library).
That error message means the server to which you are connecting does not reply to SSH connection attempts on port 22. There are three possible reasons for that:
You're not running an SSH server on the machine. You'll need to install it to be able to ssh to it.
You are running an SSH server on that machine, but on a different port. You need to figure out on which port it is running; say it's on port 1234, you then run ssh -p 1234 hostname.
You are running an SSH server on that machine, and it does use the port on which you are trying to connect, but the machine has a firewall that does not allow you to connect to it. You'll need to figure out how to change the firewall, or maybe you need to ssh from a different host to be allowed in.
EDIT: as (correctly) pointed out in the comments, the third is certainly the case; the other two would result in the server sending a TCP "reset" packet back upon the client's connection attempt, resulting in a "connection refused" error message, rather than the timeout you're getting. The other two might also be the case, but you need to fix the third first before you can move on.
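A quick way to tell these cases apart from the client side, assuming netcat is installed (the hostname is a placeholder), is:
nc -vz -w 5 test.dommainname.com 22
# "Connection refused" -> nothing is listening on that port (cases 1 or 2)
# timeout / no output  -> traffic is being dropped by a firewall (case 3)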
I got this error and found that I didn't have my SSH port (a non-standard number) whitelisted in the ConfigServer firewall.
Just adding this here because it worked for me. Without changing any settings (to my knowledge), I was no longer able to access my AWS EC2 instance with: ssh -i /path/to/key/key_name.pem admin@ecx-x-x-xxx-xx.eu-west-2.compute.amazonaws.com
It turned out I needed to add a rule for inbound SSH traffic, as explained here by AWS. For Port range 22, I added 0.0.0.0/0, which allows all IPv4 addresses to access the instance using SSH.
Note that making the instance accessible to all IPv4 addresses is a security risk; it is acceptable for a short time in a test environment, but you'll likely need a longer term solution.
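If you prefer the AWS CLI over the console, a hedged equivalent would be the following (the security group ID is a placeholder, and restricting the CIDR to your own IP is safer than 0.0.0.0/0):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.25/32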
If you are on a public network, the firewall will block all incoming connections by default. Check your firewall settings, or use a private network to SSH.
Another possibility is that SSH is not enabled on your server/system.
Check whether sudo systemctl status ssh reports the service as active.
If it's not active, try installing it with these commands:
sudo apt update
sudo apt install openssh-server
Now try to access the server/system with the following command:
ssh username@ip_address
This can happen because of the firewall.
Reset your firewall from your hosting provider's control panel,
and it will start working.
After connecting to the server again, add this rule to your firewall (ufw):
sudo ufw allow 22/tcp
There can be many possible reasons for this failure.
Some are listed above. I faced the same issue; it is very hard to find the root cause of such a failure.
I recommend you check the session timeout for ssh in the ssh_config file.
Try to increase the session timeout and see if it fails again.
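As an example of client-side keepalive/timeout tuning in the OpenSSH client config (~/.ssh/config); the values below are only illustrative:
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 5
    ConnectTimeout 30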
My VPN connection was not enabled. I was trying every possible way to open up the firewall and ports until I realized I was working from home and my VPN connection was down.
But yes, firewall and SSH configuration can also be the reason.
Try connecting to a VPN, if possible. That was the reason I was facing the problem.
Tip: if you're using an ec2 machine, try rebooting it. This worked for me the other day :)
I had this issue while trying to ssh into a local nextcloud server from my Mac.
I had no issues ssh-ing in once, but if I tried to have more than one concurrent connection, it would hang until it timed out.
Note, I was SSHing to user@public-ip-address.
I realized the second connection only failed when I tried to SSH in while on the same network, i.e. my home network.
Furthermore, when I tried ssh user@server-domain it worked!
In the end, the fix was to use ssh user@server-domain rather than ssh user@public-ip.
I have experienced a couple of nasty issues that led to these errors, and these are different from everyone else's answers here:
Wrong folder access rights. You need specific permissions on your ssh folders and files (see the example commands after this list).
a. The .ssh directory permissions should be 700 (drwx------).
b. The public key (.pub file) should be 644 (-rw-r--r--).
c. The private key (id_rsa) on the client host, and the authorized_keys file on the server, should be 600 (-rw-------).
Nasty docker network configuration. This just happened to me on an AWS EC2 instance. It turned out that I had a docker network with an ip range that interfered with the ssh access granted by the security group and VPC. The docker network's range was e.g. 192.168.176.0/20 (i.e. a range from 192.168.176.1->192.168.191.254), whereas the security group had a range of 192.168.179.0/24; interfering with the SSH access.
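For the first point, a typical way to set those permissions, assuming default key names and locations, is:
chmod 700 ~/.ssh
chmod 644 ~/.ssh/id_rsa.pub
chmod 600 ~/.ssh/id_rsa
chmod 600 ~/.ssh/authorized_keys   # on the server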
I had this error when trying to SSH into my Raspberry Pi from my MBP via a bash terminal. My RPi was connected to the network via wifi/wlan0, and its IP had been changed on restart by my router's DHCP.
Check that the IP being used to log in via SSH is correct. Re-check the IP of the device being SSH'd into (in my case the RPi), which can be checked using hostname -I.
Confirm/amend the SSH login credentials on the "guest" device (in my case the MBP), and it worked fine on my next attempt.
I faced a similar issue. I checked for the below:
If SSH is not installed on your machine, you will have to install it first. (You will get a message saying ssh is not recognized as a command.)
Check whether port 22 is open on the server you are trying to SSH into.
If the remote server is under your control and you have permission, try disabling the firewall on it.
Try to SSH again.
If the port is not the issue, then you will have to check the firewall settings, as that is what is blocking your connection.
For me too it was a firewall issue between my machine and the remote server. I disabled the firewall on the remote server and was able to make a connection using SSH.
My main machine is Windows 10 and I have a CentOS 7 VirtualBox VM.
Search your main machine for "known_hosts".
Usually, the known_hosts location on Windows is "user/.ssh/known_hosts".
Open it using Notepad and delete the line with your CentOS VBox IP,
then try to connect again from your terminal.
On macOS you can find known_hosts in "~/.ssh/known_hosts".
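On macOS/Linux you can also remove the stale entry with ssh-keygen instead of editing the file by hand (the IP is a placeholder for your VM's address):
ssh-keygen -R 192.168.56.101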
Make sure to ask the admin to authorize your device.
On Linux run:
sudo zerotier-cli listnetworks
If it returns status ACCESS DENIED, ask the admin to authorize your node. This is mentioned here:
https://discuss.zerotier.com/t/solved-cant-join-network/1919
This issue is also caused if the Dynamic Host Configuration Protocol is not set up properly.
To solve this, first check whether your IP address is configured using
ping <ip address>.
If there is no packet loss and the IP address is working fine, try any other solution. If there is no response and you have 100% packet loss, it means that your IP address is not working and not configured.
Now configure your IP address using:
sudo dhclient -v devicename
To check your device name you can use the ip a command.
For example, my device was usb0, since I had connected the device through USB.
This will configure an IP address automatically, and you can even see which one is configured. You can check with the ip a command again to confirm.
This may be very case-specific and only work in some cases, but
check whether you were previously connecting through some VPN software/application.
Try connecting to the VPN again. That worked in my case.
This happened to me after enabling port 22 with sudo ufw allow ssh. Before that, my machine refused the connection when I tried to SSH in from another one. After enabling it, I thought it would work, but instead it showed the message "connection timed out". As I had just installed Ubuntu with the option of installing basic functions alongside, I checked whether I had openssh-server with the command sudo apt list --installed | grep openssh-server. It turned out that Ubuntu had installed openssh-client by default instead. I uninstalled it and installed openssh-server with the basic commands:
sudo apt-get purge openssh-client
sudo apt update
sudo apt install openssh-server
After that, a simple "sudo ufw allow ssh" worked perfectly and I was finally able to access the machine with an ssh command.
What worked for me was that I went to my security group and reset my IP, and it worked.
Here are some considerations I went through to resolve a similar issue that I had:
Port 22
IGW (Internet Gateway)
VPC
Scene 1> This is for port 22 not being enabled with the right configuration. If the source for the port 22 rule is set to Custom or My IP, the probable scene is that this won't work from other addresses.
Scene 2> When the internet gateway is deleted, the network still exists and the instance will be functional too, but routing from the internet will not work. Hence make sure that if there is a VPC, it has an Internet Gateway attached.
Scene 3> Check the VPC for the subnet associations and route table entries. This might well tell you the cause; I found one issue in this kind of troubleshooting. The route turned out to land in a "blackhole" (it shows up in the route table section of the console). To fix this I had to check my internet gateway, and I found the issue with the IGW.
Moral of the story: always trace backward in the network!
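If you want to trace this from the CLI rather than the console, a couple of hedged examples (the VPC ID is a placeholder):
aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=vpc-0abc1234
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0abc1234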
In my case I'm on Windows; I reset my firewall settings and that fixed it.
If you get any error, first do a basic version check with ssh -V, and if SSH is not installed, install it with the sudo apt-get install openssh-server command.
Check your virtual machine's SSH service with sudo service ssh status at the console.
Check the "Active" row, and if it says inactive (dead), run sudo service ssh start.
Result: now you can check the service again with the sudo service ssh status command and send an SSH connection request.
Reset the firewall and reboot your VPS from your hosting service; it will start working fine.
Check whether you have accidentally deleted the default VPC or default subnets while creating your own VPC and subnets.
I made this mistake while creating a VPC, and hence got this error while connecting via SSH.
Also check whether you have attached an IGW to the public subnets.
It's not complicated.
First, check that your OpenSSH service is active, then disable your firewall (use your control panel).
To disable it, use PuTTY or any alternative and run sudo ufw disable.
Try again now.
Update the security group of that instance. Your local IP must have changed; every time its IP flips, you will have to go and update the security group.