We have an application which uses SSH to copy artifacts from one node to another. While creating the Docker image (CentOS 8 based), I installed the OpenSSH server and client. When I run the image with the Docker command and exec into it, I can run the SSH command successfully, and I also see port 22 enabled and listening ($ lsof -i -P -n | grep LISTEN).
But if I start a pod/container using the same image in the Kubernetes cluster, I do not see port 22 enabled and listening inside the container. Even if I try to start sshd from inside the k8s container, it gives me the error below:
Redirecting to /bin/systemctl start sshd.service
Failed to get D-Bus connection: Operation not permitted.
Is there any way to start the K8s container with SSH enabled?
There are three things to consider:
Like David said in his comment:
I'd redesign your system to use a communication system that's easier
to set up, like with HTTP calls between pods.
If you put a service in front of your deployment, it is not going to relay any SSH connections. So you have to point to the pods directly, which might be pretty inconvenient.
In case you have missed it: you need to declare port 22 in your deployment template.
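For example, a minimal sketch of how this could look when applied to an existing deployment (the deployment name my-app is an assumption; sshd is started in the foreground because there is no systemd/D-Bus inside a normal container, which is exactly why systemctl fails with the error you quoted):
# Sketch: declare containerPort 22 and run sshd in the foreground
# ("my-app" and the container index 0 are assumptions about your deployment)
kubectl patch deployment my-app --type='json' -p='[
  {"op":"add","path":"/spec/template/spec/containers/0/ports","value":[{"containerPort":22}]},
  {"op":"add","path":"/spec/template/spec/containers/0/command","value":["/bin/sh","-c","ssh-keygen -A && exec /usr/sbin/sshd -D -e"]}
]'
The same two changes can of course be made directly in the deployment YAML instead of patching.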
Please let me know if that helped.
Background
I have a machine in production running an elixir application (no access to iex, only to erl) and I am tasked with running an analysis on why we are consuming so much CPU. The idea here would be to launch observer, check the processes tab and see the processes with the most reductions.
How am I connecting?
To connect I am following a tutorial from a blog:
https://sgeos.github.io/elixir/erlang/observer/2016/09/16/elixir_erlang_running_otp_observer_remotely.html
Their instructions are as follows:
launch the app in the production machine with a cookie and a name
from local run: ssh user@public_ip "epmd -names" to get the name of the app and the port used
from local create an SSH tunnel to the remote machine: ssh -L 4369:user@public_ip:4369 -L 42877:user@public_ip:42877 user@public_ip (4369 is the epmd port by default, 42877 is the port of the app)
from local connect to the remote machine using the node's name: erl -name "user@app_name" -setcookie "mah_cookie" -hidden -run observer
Problem
And now in theory I should be able to use observer on the machine. Instead however I am greeted with the following error:
Protocol 'inet_tcp': register/listen error: epmd_close
So, after scouring the dark side of the internet, I decided to use sudo journalctl -f to check all the logs of the machine, and I found this:
channel 3: open failed: administratively prohibited: open failed
my_app_name sshd[8917]: error: connect_to flame@99.999.99.999: unknown host (Name or service not known)
/scripts/watchdog.sh")
my_app_name CRON[9985]: pam_unix(cron:session): session closed for user flame
Where:
erlang -name: my_app_name
machine user: flame
machine public ip: 99.999.99.999 (obviously not real)
So it tells me "unknown host"? I am confused, since 99.999.99.999 is the public IP of the machine itself!
Questions
What am I doing wrong?
I read that in older versions of Erlang I can't monitor a machine with observer if the nodes are on different networks (which is the case here, because I want to monitor this machine from my localhost), but I didn't find any information about whether this still applies nowadays.
If this is in fact impossible, what alternatives do I have?
Solution
After 3 days of non-stop searching, I finally found something that works.
To summarize, I am putting here everything I did.
All steps are run on the local machine:
get the ports from the remote server:
> ssh remote-user@remote-ip "epmd -names"
epmd: up and running on port 4369 with data:
name super_duper_app at port 43175
create an ssh tunnel with the ports:
ssh remote-user@remote-ip -L4369:localhost:4369 -L43175:localhost:43175
On another terminal in your local machine, run an iex session with the cookie the app in your remote server is using. Then connect to it and start observer:
iex --name observer@127.0.0.1 --cookie super_duper_cookie
Node.connect :"super_duper_app@127.0.0.1"
> true
:observer.start
With observer started, select the machine from the Nodes menu.
Possible setbacks
If you have tried this and it didn't work there are a few things you can check for:
Check if the EPMD port on your local machine is free; if not, kill the process using it to free it (see the sketch after this list).
Check your ssh tunneling keys and configurations for permissions. As @Roberto Aloi pointed out, this link can be useful: https://unix.stackexchange.com/questions/14160/ssh-tunneling-error-channel-1-open-failed-administratively-prohibited-open
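If you are not sure whether the local EPMD port is free, a quick way to check and clean it up (a sketch; 4369 is the default EPMD port):
# Show whatever is bound to the local EPMD port
lsof -i tcp:4369
# If it is a stray local epmd with no live nodes registered, stop it before opening the tunnel
epmd -kill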
Through browsing several websites and Stack Overflow, there seems to be a way to view the messages in an ActiveMQ queue using Jolokia and Hawt.io, but I have been unsuccessful so far.
We are running our ActiveMQ (version 5.12.0) as an embedded service in our Spring webapp and exposed the Jolokia web services as explained on this page:
https://jolokia.org/reference/html/agents.html#agent-war-programmatic
When looking at the Jolokia web services via Hawt.io, I cannot figure out how to actually view the messages in the queue.
Here is a screenshot showing the queue size:
So, how can I view the messages in an Activemq queue using Jolokia and Hawt.io?
The solution we ended up going with didn't actually use Jolokia or Hawt.io.
We ended up using JConsole.
When looking at ActiveMQ queues, if you put Java-serialized objects on the queue the data won't be very readable, but if you serialize your objects to JSON it is quite easy to see what is in the queue.
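For completeness, if you do want to pull messages through Jolokia itself, the queue MBean exposes a browse() operation that can be invoked over Jolokia's REST API. A sketch, assuming a standalone broker with the default web console at localhost:8161, default admin/admin credentials, and a queue named MY.QUEUE (all of these are assumptions; the endpoint and broker name will differ for an embedded broker behind the WAR agent):
# Read the queue depth (URL, credentials and names are assumptions)
curl -u admin:admin "http://localhost:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=MY.QUEUE/QueueSize"
# Browse the messages currently on the queue
curl -u admin:admin "http://localhost:8161/api/jolokia/exec/org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=MY.QUEUE/browse()"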
It is terribly important to read these directions all the way through, carefully.
These instructions discuss SSH tunneling, and it is quite easy to mess something up; there are not very good log messages when things go wrong.
Remote Debugging
Due to security reasons, we have closed all the open debug ports on our remote virtual machines.
To get remote debugging to work, we will need to use SSH Tunneling to access the remote virtual machine debugging ports.
Remote Application Setup
The application that you want to remotely debug must have the JPDA Transport connector enabled.
For Java versions after 1.4, to enable the JPDA transport, add the following VM parameter when starting your Java virtual machine:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=<remote_port_number>
The above attributes are hard to describe briefly, but what is presented above works well. More information about these attributes can be found on the Connection and Invocation Details page.
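For example, a complete launch command might look like this (a sketch; the jar name is a placeholder, and port 10001 matches the tunneling example further down):
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10001 -jar my-application.jar
With suspend=n the application starts normally and simply waits for a debugger to attach on that port.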
Local IDE Setup
In Intellij to connect to a remote java virtual machine, open the "Run/Debug Configurations" window.
Then select a new "Remote" configuration.
Enter the following values:
Debugger mode: Attach to remote JVM
Host: localhost
Port: <local_port_number>*
Use module classpath: <local_package>**
* The <local_port_number> should be the local port number of the ssh tunneling session that you will be starting. It is recommended that the <remote_port_number> and the <local_port_number> are the same value.
** This value should be whatever your local project is named.
SSH Tunneling
To actually connect to the remote debugging port, we'll need to use SSH Tunneling.
Run the following command via a terminal command line:
$ ssh -L <local_port_number>:localhost:<remote_port_number> -f <username>@<remote_server_name> -N
Example:
$ ssh -L 10001:localhost:10001 -f <your_username>@<your.server.com> -N
This command does the following:
Starts an ssh session with the <remote_server_name>.
Connects your <local_port_number> to the <remote_port_number> of the localhost of the remote machine. In this case, we're saying connect to localhost:10001 of the <your.server.com> machine.
Start remote debugging in the Intellij IDE and you should then be connected to the remote java virtual machine.
Resources
Intellij IDEA remotely debug java console program
Remote debug of a Java App using SSH tunneling (without opening server ports)
Remote JMX
We use JMX to look at the Spring Integration KahaDB queues.
Remote Application Setup
Add the following vm parameters:
-Dcom.sun.management.jmxremote.port=64250
-Dcom.sun.management.jmxremote.rmi.port=64250
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=127.0.0.1
The jmxremote.port and jmxremote.rmi.port can be any numbers and they can be different values; it just helps if they are the same value when doing the ssh tunneling below.
SSH Tunneling
$ ssh -L 64250:localhost:64250 -f <your_username>@<your.server.com> -N
JConsole Setup
This is done in a new terminal window.
$ jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=64250 service:jmx:rmi:///jndi/rmi://127.0.0.1:64250/jmxrmi
Resources
Why Java opens 3 ports when JMX is configured?
Clean Up
To close the ssh processes above:
$ lsof -i tcp | grep ^ssh
Then perform a kill on the process id.
Using jps and jstack to Help Debug
List all java processes running on a machine:
$ sudo jps
List the threads of an application running:
$ sudo -u <process_owner> jstack <process_id>
Example:
$ sudo -u tomcat jstack <pid>
I am having issues getting Rabbit to cluster.
I boot up two nodes on ec2.
On the first node booted I do this.
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
I boot another node.
sudo service rabbitmq-server stop
#Copy cookie from the first server booted
sudo su - -c 'echo -n "cookie" > /var/lib/rabbitmq/.erlang.cookie'
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl cluster rabbit@server1
1) server1 is running
2) What ports need to be open? I have 22, 4369, and 5672 open.
sudo rabbitmqctl cluster rabbit@aws-rabbit-server-east-development-20121102162143
Clustering node 'rabbit@aws-rabbit-server-east-development-20121103033005' with ['rabbit@aws-rabbit-server-east-development-20121102162143'] ...
Error: {no_running_cluster_nodes,['rabbit@aws-rabbit-server-east-development-20121102162143'],
['rabbit@aws-rabbit-server-east-development-20121102162143']}
What could possibly be missing from their docs, or what am I missing?
I had a similar problem on EC2 with two windows machines. I eventually got it working but I'm not sure I did it in the correct way so there may be a better solution.
The issue I found was that the two nodes could not see each other when trying to cluster. Each time you start a Rabbit node it seemed to be assigned a port number dynamically.
This obviously makes it very difficult to know which port to open up in the security group so to solve this, I restricted the range of ports Rabbit chose from when assigning the port. I restricted this to a range of 1 port on each node so I always know which port was being assigned.
The easiest way I found to do this was by editing the sbin\rabbitmq-service.bat file.
Find the line -kernel inet_default_connect_options "[{nodelay,true}]" ^
and add the following two lines underneath it:
-kernel inet_dist_listen_min ##### ^
-kernel inet_dist_listen_max ##### ^
replacing ##### with your chosen port number.
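On a Linux install (for example on EC2), the same pinning can usually be done without editing a .bat file by passing the kernel flags through rabbitmq-env.conf; a sketch, assuming 25672 as the chosen port (the file location and variable support can vary by RabbitMQ version):
# /etc/rabbitmq/rabbitmq-env.conf (location may vary)
# Pin the Erlang distribution listener to a single known port so it can be
# opened in the EC2 security group.
SERVER_ADDITIONAL_ERL_ARGS="-kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672"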
So you should now open up the following ports:
5672 - RabbitMQ’s listening port
4369 - Erlang Port Mapper Daemon
##### - the chosen port number for the Erlang nodes to communicate via
Because Erlang does not recognise FQDNs, you may need to modify the hosts file on all the servers to make sure they are all able to resolve the Erlang node names to an IP address, e.g.
123.123.123.111 NODE1
123.123.123.222 NODE2
Once this is done you should be able to see each node from the other. You can check this by calling the following from the command line (replacing rabbit@NODE2 with whichever node you want to see):
rabbitmqctl status -n rabbit@NODE2
Hope this gives you some help; I'm no expert, but I found this got things working for me!
I am trying to connect to a remote server via ssh but I am getting a connection timeout.
I ran the following command
ssh testkamer@test.dommainname.com
and I get the following result
ssh: connect to host testkamer@test.dommainname.com port 22: Connection timed out
but if I try to connect to another remote server then I can log in successfully.
So I think there is no problem with ssh itself, and when another person tries to log in to the same server with the same login and password, they can log in successfully.
Please help me
Thanks.
Here are a couple of things that could be preventing you from connecting to your Linode instance:
DNS problem: if the computer that you're using to connect to your remote server isn't resolving test.kameronderdehamer.nl properly then you won't be able to reach your host. Try to connect using the public IP address assigned to your Linode and see if it works (e.g. ssh user@123.123.123.123). If you can connect using the public IP but not using the hostname, that would confirm that you're having some problem with domain name resolution.
Network issues: there might be some network issues preventing you from establishing a connection to your server. For example, there may be a misconfigured router in the path between you and your host, or you may be experiencing packet loss. While this is not frequent, it has happened to me several times with Linode and can be very annoying. It could be a good idea to check this just in case. You can have a look at Diagnosing network issues with MTR (from the Linode library).
That error message means the server to which you are connecting does not reply to SSH connection attempts on port 22. There are three possible reasons for that:
You're not running an SSH server on the machine. You'll need to install it to be able to ssh to it.
You are running an SSH server on that machine, but on a different port. You need to figure out on which port it is running; say it's on port 1234, you then run ssh -p 1234 hostname.
You are running an SSH server on that machine, and it does use the port on which you are trying to connect, but the machine has a firewall that does not allow you to connect to it. You'll need to figure out how to change the firewall, or maybe you need to ssh from a different host to be allowed in.
EDIT: as (correctly) pointed out in the comments, the third is certainly the case; the other two would result in the server sending a TCP "reset" packet back upon the client's connection attempt, resulting in a "connection refused" error message rather than the timeout you're getting. The other two might also be the case, but you need to fix the third first before you can move on.
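A couple of quick checks from the client side can help tell these cases apart (a sketch; the hostname and username are the ones from the question):
# Does anything answer on port 22 at all? A timeout vs. "connection refused" is the key signal.
nc -vz test.dommainname.com 22
# Verbose output shows exactly where the ssh attempt stalls
ssh -v testkamer@test.dommainname.com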
I got this error and found that my SSH port (a non-standard number) was not whitelisted in the config server firewall.
Just adding this here because it worked for me. Without changing any settings (to my knowledge), I was no longer able to access my AWS EC2 instance with: ssh -i /path/to/key/key_name.pem admin@ecx-x-x-xxx-xx.eu-west-2.compute.amazonaws.com
It turned out I needed to add a rule for inbound SSH traffic, as explained here by AWS. For Port range 22, I added 0.0.0.0/0, which allows all IPv4 addresses to access the instance using SSH.
Note that making the instance accessible to all IPv4 addresses is a security risk; it is acceptable for a short time in a test environment, but you'll likely need a longer term solution.
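For reference, the same inbound rule can also be added from the command line (a sketch; the security group ID is a placeholder, and the CIDR should ideally be narrowed to your own IP):
# Allow inbound SSH (TCP 22) on the instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0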
If you are on a public network, the firewall will block all incoming connections by default. Check your firewall settings, or use a private network to SSH.
One possibility could be that SSH is not enabled on your server/system.
Check whether sudo systemctl status ssh reports it as active.
If it's not active, try installing with the help of these commands
sudo apt update
sudo apt install openssh-server
Now try to access the server/system with following command
ssh username#ip_address
This happens because of firewall connection.
Reset your firewall connection from your hosting website.
It will start working.
After connecting to the server again add this to your (ufw) security
sudo ufw allow 22/tcp
There can be many possible reasons for this failure.
Some are listed above. I faced the same issue; it is very hard to find the root cause of the failure.
I would recommend checking the session timeout for ssh in the ssh_config file.
Try to increase the session timeout and see if it fails again.
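For reference, these are the client-side options usually involved (a sketch of ~/.ssh/config; the values are only examples, and ConnectTimeout is the one that controls how long the client waits before reporting a timeout):
# ~/.ssh/config -- example client-side settings
Host *
    # How long to wait for the TCP connection to be established
    ConnectTimeout 30
    # Send a keepalive probe every 60 seconds; give up after 3 unanswered probes
    ServerAliveInterval 60
    ServerAliveCountMax 3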
My VPN connection was not enabled. I was trying every possible way to open up the firewall and ports until I realized I was working from home and my VPN connection was down.
But yes, firewall and ssh configuration can also be a reason.
Try connecting to a VPN, if possible. That was the reason I was facing the problem.
Tip: if you're using an ec2 machine, try rebooting it. This worked for me the other day :)
I had this issue while trying to ssh into a local nextcloud server from my Mac.
I had no issues ssh-ing in once, but if I tried to have more than one concurrent connection, it would hang until it timed out.
Note, I was sshing to user@public-ip-address.
I realized the second connection only failed when I tried to ssh into it while on the same network, i.e. my home network.
Furthermore, when I tried ssh user@server-domain it worked!
The end fix was to use ssh user@server-domain rather than ssh user@public-ip.
I have experienced a couple of nasty issues that led to these errors, and they are different from everyone else's answers here:
Wrong folder access rights. You need specific permissions on your ssh folders and files (see the commands after this list).
a. The .ssh directory permissions should be 700 (drwx------).
b. The public key (.pub file) should be 644 (-rw-r--r--).
c. The private key (id_rsa) on the client host, and the authorized_keys file on the server, should be 600 (-rw-------).
Nasty docker network configuration. This just happened to me on an AWS EC2 instance. It turned out that I had a docker network with an ip range that interfered with the ssh access granted by the security group and VPC. The docker network's range was e.g. 192.168.176.0/20 (i.e. a range from 192.168.176.1->192.168.191.254), whereas the security group had a range of 192.168.179.0/24; interfering with the SSH access.
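For the first point, the permissions can be set like this (a sketch; standard OpenSSH file names are assumed):
# On the client
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub
# On the server
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys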
I had this error when trying to SSH into my Raspberry Pi from my MBP via a bash terminal. My RPi was connected to the network via wifi/wlan0, and its IP had been changed upon restart by my router's DHCP.
Check that the IP being used to log in via SSH is correct. Re-check the IP of the device being SSH'd into (in my case the RPi), which can be found using hostname -I.
Confirm/amend the SSH login credentials on the "guest" device (in my case the MBP); after that it worked fine on my attempt.
I faced a similar issue. I checked for the below:
If ssh is not installed on your machine, you will have to install it first. (You will get a message saying ssh is not recognized as a command.)
Check whether port 22 is open on the server you are trying to ssh into.
If the remote server is under your control and you have permissions, try disabling the firewall on it.
Try to ssh again.
If the port is not the issue, then you will have to check the firewall settings, as that is what is blocking your connection.
For me too it was a firewall issue between my machine and the remote server. I disabled the firewall on the remote server and was able to make a connection using ssh.
My main machine is Windows 10 and I have a CentOS 7 VBox.
Search your main machine for "known_hosts".
Usually the known_hosts file on Windows is under "user/.ssh/known_hosts".
Open it using Notepad and delete the line containing your CentOS VBox IP,
then try to connect again from your terminal.
On macOS you can find known_hosts in "~/.ssh/known_hosts".
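Alternatively, instead of editing the file by hand, the stale entry can be removed with ssh-keygen (the IP below is a placeholder for your VBox IP):
# Remove the old host key entry for that IP from known_hosts
ssh-keygen -R 192.168.56.101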
Make sure to ask the admin to authorize your device.
On Linux run:
sudo zerotier-cli listnetworks
if it returns status ACCESS DENIED ask the admin to authorize your node. This is mentioned here.
https://discuss.zerotier.com/t/solved-cant-join-network/1919
This issue can also be caused if the Dynamic Host Configuration Protocol (DHCP) is not set up properly.
To solve this, first check whether your IP address is configured, using
ping ipaddress
If there is no packet loss and the IP address is working fine, try one of the other solutions. If there is no response and you have 100% packet loss, it means that your IP address is not working and is not configured.
Now configure your IP Address using,
sudo dhclient -v devicename
To check your device you can use the 'ip a' command
For example, my device was usb0, since I had connected the device through USB.
This will configure an IP Address automatically and you can even see which one is configured. You can again check with the 'ip a' command to confirm.
This may be very case-specific and only work in some cases, but
check whether you were previously connecting through some VPN software/application.
Try connecting to the VPN again, if possible. That was the reason I was facing the problem.
This happened to me after enabling port 22 with "sudo ufw allow ssh". Before that, I was getting a refusal from my machine when connecting with ssh from another one. After enabling it, I thought it would work, but instead it showed the message "connection timed out". As I had just installed Ubuntu with the option of installing basic functions alongside, I checked whether I had openssh-server with the command sudo apt list --installed | grep openssh-server. It turned out that Ubuntu had installed openssh-client by default instead. I uninstalled it and installed openssh-server with the basic commands:
sudo apt-get purge openssh-client
sudo apt update
sudo apt install openssh-server
After that, a simple "sudo ufw allow ssh" worked perfectly and I was finally able to access the machine with an ssh command.
What worked for me was going to my security group and resetting my IP; after that it worked.
Here are some considerations I took into account to resolve a similar issue that I had:
Port 22
IGW (Internet Gateway)
VPC
Scene 1> This is for port 22 not being enabled with the right configuration. If the rule's source is set to Custom or My IP, the probable scene is that it won't work from other addresses.
Scene 2> If the Internet Gateway has been deleted, the network still exists and the instance will be functional too, but routing from the internet will not work. Hence make sure that if there is a VPC, it has an Internet Gateway attached.
Scene 3> Check the VPC for the subnet associations and route table entries. This will probably tell you the cause. I found one this way in this kind of troubleshooting: the route used to land in a "blackhole" (it shows up in the route table section of the console). To fix this I had to track down my internet gateway, and found the issue with the IGW.
Moral of the story: always trace backward in the network!
In my case I'm on Windows; I reset my firewall settings and that fixed it.
If you get any error, check the basics first: check the installed version with ssh -V, and if it is not installed, install it with the sudo apt-get install openssh-server command.
Check your virtual machine's ssh service with sudo service ssh status at the console.
Check the "Active" row; if it says inactive (dead), run sudo service ssh start.
Result: now you can check your connection again with the sudo service ssh status command and send an ssh connection request.
Reset the firewall and reboot your VPS from your hosting service; it will start working perfectly fine.
Check whether you have accidentally deleted the default VPC or default subnets while creating your own VPC and subnets.
I made this mistake while creating a VPC, and hence got this error while connecting via ssh.
Also check whether you have attached an IGW to the public subnets.
It's not complicated.
First, check that your OpenSSH server is active, then disable your firewall (use your control panel).
To disable it from a shell instead, use PuTTY or any alternative and run sudo ufw disable.
Try now.
Update the security group of that instance. Your local IP must have changed; every time your IP flips, you will have to go and update the security group.