Jenkins slave using an ssh gateway - ssh

I have jenkins running on master.com, and want to have a slave running on slave.com. However, to ssh to the slave I need to go through gateway.com. Generally to ssh to this machine from my normal account, I just use ~/.ssh/config to set up a ProxyCommand.
I have replicated this setup in my /var/lib/jenkins/.ssh/config file:
Host slave.com
User felix
ProxyCommand ssh felix@gateway.com nc %h %p
I have public key authentication set up for both the gateway and the slave, such that from the command line I can ssh directly from jenkins@master.com to felix@slave.com simply by doing ssh slave.com.
Unfortunately Jenkins does not seem to respect my .ssh/config setup, and the connection times out (the slave is not reachable directly). The Jenkins slave log file is:
java.io.IOException: There was a problem while connecting to slave.com:22
....
Caused by: java.net.ConnectException: Connection timed out
How can I figure out whether or not Jenkins is respecting my .ssh/config file? Am I missing a step in configuring the master Jenkins account or the .ssh/config file for Jenkins?

Instead of using Jenkins' built-in SSH implementation, use the "Launch slave by execution of command on the Master" option. You can then use the regular ssh command and take advantage of .ssh/config like you're used to. If you click the ? button next to the option, it should give you all the details you need.
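For example, the launch command you enter there is typically just an ssh invocation that starts the agent JAR on the slave (the JAR location below is an assumption; adjust it to wherever the agent JAR actually lives on slave.com):
$ ssh slave.com java -jar ~/bin/slave.jar   # hypothetical path to the agent JAR on the slave
Because this runs the stock OpenSSH client as the jenkins user on master.com, the ProxyCommand in /var/lib/jenkins/.ssh/config is honored.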

The jenkins-ssh-slaves plugin uses the trilead SSH2 implementation written in Java.
~/.ssh/config is only read by OpenSSH, and won't affect how Jenkins connects to your slaves.

Related

Why can't I connect to GitLab in a container via SSH?

I have installed a local GitLab in Portainer CE on an Asustor NAS.
I used PuTTY to try to connect to GitLab via SSH on the right port, but it responds with "no supported authentication methods available (server sent publickey)".
I tried creating a new SSH key and adding it to GitLab, without success. On the Asustor the SSH service is active; in fact, when I try to connect to the Asustor via SSH, it responds correctly. I use port 22 for the Asustor's SSH and port 49165 for GitLab's SSH. Can anyone help me?
Thanks
Check first if you have a GIT_SSH environment variable set to something other than ssh.exe from Windows (C:\Windows\System32\OpenSSH\ssh.exe) or from Git (C:\Program Files\Git\usr\bin\ssh.exe).
In general, there is no need for PuTTYgen to manage SSH connections on Windows.
If your GitLab runs in a Docker container on your NAS, you might need to map its internal SSH port to the host (Asustor) port 22, as described in this thread.
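A quick way to verify the key and the port mapping from the client (the NAS address is a placeholder; GitLab answers SSH as the git user):
$ ssh -T -p 49165 git@<asustor-ip>
If the key is registered correctly, GitLab replies with a welcome message for your account; "Permission denied (publickey)" means the key is still not associated with your GitLab user. Also make sure that GIT_SSH, if set, points to the same ssh client you tested with.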

JMeter SSH Sampler can't establish socket error

OS: macOS Sierra
So I installed JMeter and the SSH Sampler plugin. I want to access an API host and execute a command on it via SSH. I am able to connect to this host via ssh using my SSH key, but can't do the same using the JMeter SSH Sampler.
Keep getting:
'Failed to connect to server: timeout: socket is not established'
Currently I'm out of ideas; how can I achieve this? Are there any peculiarities or additional settings I'm missing?
Adding screenshots with my settings. I am filling in the IP address of a host which I can reach; I just can't publish it here.
The key is in OpenSSH format.
(screenshots: response and sampler settings)
Remote Commands: Linux/MacOSX
Linux, Unix and macOS operating systems can be accessed remotely (in the majority of cases) through the SSH (Secure Shell) protocol. To accomplish that we can use the JMeter SSH Sampler plugin.
Installation:
Download ApacheJMeter_ssh-x.x.x.jar and jsch-x.x.x.jar from the SSH Sampler Releases page.
Drop ApacheJMeter_ssh-x.x.x.jar into the /lib/ext folder of your JMeter installation.
Drop jsch-x.x.x.jar to /lib folder of your JMeter installation.
Restart JMeter.
You should see 2 new Samplers: SSH Command and SSH SFTP.
Your login can fail due to your DSA (ssh-dss) key being rejected. As per the OpenSSH 7.0 release notes:
Support for ssh-dss, ssh-dss-cert-* host and user keys is disabled
by default at run-time. These may be re-enabled using the
instructions at http://www.openssh.com/legacy.html
Alternatively, you can specify the path to an id_rsa private key file.
I don't think the SSH Command sampler understands the ~ shorthand; you should provide the full path to your SSH private key.
You also have to provide your SSH username.
In any case, make sure you can reach port 22 of the machine where the SSH server is running from the machine where JMeter is running.
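A quick sanity check from the machine running JMeter (host, user and key path below are placeholders):
$ nc -vz <api_host> 22
$ ssh -i /Users/<you>/.ssh/id_rsa <user>@<api_host> 'echo ok'
If both succeed outside JMeter, the remaining problem is almost certainly in the sampler settings (full key path, username, port).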
More information: How to Run External Commands and Programs Locally and Remotely from JMeter

View ActiveMQ Messages with Jolokia and Hawt.io

Through browsing several websites and posts here on Stack Overflow, there seems to be a way to view the messages in an ActiveMQ queue using Jolokia and Hawt.io, but I have been unsuccessful to this point.
We are running our ActiveMQ (version 5.12.0) as an embedded service in our Spring webapp and exposed the Jolokia web services as explained on this page:
https://jolokia.org/reference/html/agents.html#agent-war-programmatic
When looking at the Jolokia web services via Hawt.io, I cannot figure out how to actually view the messages in the queue.
Here is a screenshot showing the queue size:
So, how can I view the messages in an ActiveMQ queue using Jolokia and Hawt.io?
The solution we ended up going with didn't actually use Jolokia or Hawt.io.
We ended up using JConsole.
When looking at ActiveMQ queues, if you put a Java-serialized object in the queue the data won't be very readable, but if you serialize your object to JSON it is quite easy to see what is in the queue.
It is terribly important to read these directions all the way through, carefully.
These instructions discuss SSH Tunneling and it is quite easy to mess something up and there are not very good log messages when things go wrong.
Remote Debugging
For security reasons, we have closed all the open debug ports on our remote virtual machines.
To get remote debugging to work, we will need to use SSH Tunneling to access the remote virtual machine debugging ports.
Remote Application Setup
The application that you want to remotely debug must have the JPDA Transport connector enabled.
After Java 1.4, to enable the JPDA Transport, add the following vm parameter when starting your java virtual machine:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=<remote_port_number>
The above attributes take a bit of explaining, but the line as presented works well. More information about them can be found on the Connection and Invocation Details page.
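For example, a complete launch command might look like this (the JAR name and port 5005 are placeholders):
$ java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar your-app.jar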
Local IDE Setup
In IntelliJ, to connect to a remote Java virtual machine, open the "Run/Debug Configurations" window.
Then select a new "Remote" configuration.
Enter the following values:
Debugger mode: Attach to remote JVM
Host: localhost
Port: <local_port_number>*
Use module classpath: <local_package>**
* The <local_port_number> should be the local port number of the SSH tunneling session that you will be starting. It is recommended that <remote_port_number> and <local_port_number> be the same value.
** This value should be whatever your local project is named.
SSH Tunneling
To actually connect to the remote debugging port, we'll need to use SSH Tunneling.
Run the following command via a terminal command line:
$ ssh -L <local_port_number>:localhost:<remote_port_number> -f <username>@<remote_server_name> -N
Example:
$ ssh -L 10001:localhost:10001 -f <your_username>@<your.server.com> -N
This command does the following:
Starts an ssh session with the <remote_server_name>.
Connects your <local_port_number> to the <remote_port_number> of the localhost of the remote machine. In this case, we're saying connect to localhost:10001 of the <your.server.com> machine.
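Before attaching the debugger, you can confirm the tunnel is listening locally (using the example port 10001):
$ lsof -i :10001
It should list an ssh process bound to the local port.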
Start remote debugging in the IntelliJ IDE and you should then be connected to the remote Java virtual machine.
Resources
Intellij IDEA remotely debug java console program
Remote debug of a Java App using SSH tunneling (without opening server ports)
Remote JMX
We use JMX to look at the Spring Integration Kaha DB Queues.
Remote Application Setup
Add the following vm parameters:
-Dcom.sun.management.jmxremote.port=64250
-Dcom.sun.management.jmxremote.rmi.port=64250
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=127.0.0.1
The jmxremote.port and jmxremote.rmi.port can be any port numbers and they can be different values; it just helps if they are the same value when doing the SSH tunneling below.
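Put together, the launch command might look like this (the application JAR is a placeholder):
$ java \
    -Dcom.sun.management.jmxremote.port=64250 \
    -Dcom.sun.management.jmxremote.rmi.port=64250 \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Djava.rmi.server.hostname=127.0.0.1 \
    -jar your-app.jar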
SSH Tunneling
$ ssh -L 64250:localhost:64250 -f <your_username>@<your.server.com> -N
JConsole Setup
This is done in a new terminal window.
$ jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=64250 service:jmx:rmi:///jndi/rmi://127.0.0.1:64250/jmxrmi
Resources
Why Java opens 3 ports when JMX is configured?
Clean Up
To close the ssh processes above:
$ lsof -i tcp | grep ^ssh
Then perform a kill on the process id.
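For example, if lsof shows the tunnel's process id in its second column:
$ kill <process_id>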
Using jps and jstack to Help Debug
List all java processes running on a machine:
$ sudo jps
List the threads of an application running:
$ sudo -u <process_owner> jstack <process_id>
Example:
$ sudo -u tomcat jstack <pid>

Executing a shell script on a remote host through ssh which in turn runs ssh to another host to apply database changes

I am trying to run a shell script on a remote host (A) through ssh. The shell script in turn uses SSH to connect to another host (B) to perform some database-related operations. However, it looks like the agent is not being forwarded when connecting to host B, and I see a connection refused.
This is equivalent to executing the ssh command as follows:
ssh -A some@A.com "ssh some@B.com 'ls'"
I have used -A to enable agent forwarding but still no luck.
How can I achieve this? Thanks.
So silly of me. It was a firewall issue rather than an ssh issue: host A had no permission to connect to host B. Should have thought this through! Thanks.
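For anyone hitting the same symptom, two quick checks help separate an agent-forwarding problem from a connectivity problem (usernames and hosts follow the example above, and assume nc is available on host A):
$ ssh -A some@A.com 'ssh-add -l'
$ ssh some@A.com 'nc -vz B.com 22'
The first lists the keys the forwarded agent offers on host A (or reports that no agent is reachable); the second shows whether A can open B's SSH port at all, which is where a firewall problem like this one shows up.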

How to use ansible with two factor authentication?

I have enabled two-factor authentication for ssh using Duo Security (using this playbook: https://github.com/CoffeeAndCode/ansible-duo).
How can I use Ansible to manage the server now? The SSH calls fail at gathering facts because of this. I want the person running the playbook to enter the two-factor code before the playbook is run.
Disabling two-factor auth for the deployment user is a possible solution but creates a security issue which I would like to avoid.
It's a hack, but you can tunnel a non-2fac Ansible SSH connection through a 2fac-enabled SSH connection.
Overview
We will set up two users: ansible will be the user Ansible uses. It should be authenticated in a way that's supported by Ansible (i.e., not 2fac). This user will be restricted so it cannot connect from anywhere but 127.0.0.1, so it is not accessible from outside the machine.
The second user, ansible_tunnel will be open to the outside world, but will be authenticated by two factors, and will only allow tunneling of SSH connections to the local machine.
You must be able to configure 2-factor authentication only for some users (not all).
Some info on SSH tunnels.
On the target machine:
Create two users: ansible and ansible_tunnel
Put your public key in ~/.ssh/authorized_keys of both users
Set the shell of ansible_tunnel to /bin/false, or lock the user - it will be used for tunneling exclusively, not running commands
Add the following to /etc/ssh/sshd_config:
AllowTcpForwarding no
AllowUsers ansible@127.0.0.1 ansible_tunnel
Match User ansible_tunnel
AllowTcpForwarding yes
PermitOpen 127.0.0.1:22
ForceCommand echo 'This account can only be used for tunneling SSH sessions'
Setup 2-factor authentication only for ansible_tunnel
Restart sshd
On the machine running Ansible:
Before running Ansible, run the following (on the Ansible machine, not the target):
ssh -N -L 8022:127.0.0.1:22 ansible_tunnel@<host>
You will be authenticated using two factors.
Once the tunnel is up (check with netstat), run Ansible with ansible_ssh_user=ansible, ansible_ssh_port=8022 and ansible_ssh_host=localhost.
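For example, an inventory entry for a host reached through the tunnel could look like this (the host alias is a placeholder; the variable names are the pre-2.0 spellings used above):
myserver ansible_ssh_host=localhost ansible_ssh_port=8022 ansible_ssh_user=ansible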
Recap
Only ansible_tunnel can connect from the outside, and it will be authenticated using two factors
Once the tunnel is set up, connecting to port 8022 on the local machine is the same as connecting to sshd on the remote machine
We're allowing ansible to connect over SSH only when it is done through the localhost, so only connections that are tunneled are allowed
Scale
This will not scale well for multiple servers, due to the need to open a separate tunnel for each machine, which requires manual action. However, if you've chosen 2-factor authentication for your servers you're already willing to do some manual action to connect to each server, and this solution will only add a little overhead with some script-wrapping.
[EDITED TO ADD]
Bonus
For convenience, we may want to log into the maintenance account directly to do some manual work, without going through the process of setting up a tunnel. We can configure SSH to require 2fac authentication in this case, while maintaining the ability to connect without 2fac through the tunnel:
# All users must authenticate using two factors
AuthenticationMethods publickey,keyboard-interactive
# Allow both maintenance user and tunnel user with no restrictions
AllowUsers ansible ansible_tunnel
# The maintenance user is allowed to authenticate using a single factor only
# when connecting from a local address - it should be impossible to connect to
# this user using a single factor from the outside (the only way to do that is
# having an existing access to the machine, or use the two-factor tunnel)
Match User ansible Address 127.0.0.1
AuthenticationMethods publickey
I can use ansible with ssh and 2FA using the ControlMaster feature of ssh and ansible.
My local ssh client is configured to create a ControlPath socket for multiplexing connections. Ansible is configured to use the same socket.
Local ssh client
This configuration enables multiplexing for all connections. I personally store this configuration in ~/.ssh/config:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p.socket
ControlPersist 1m
When a connection is established, a socket appears in the $HOME/.ssh directory. This socket persists for one minute after disconnection.
Configure ansible
Ansible is configured to re-use the local socket.
Add this in your ansible configuration file (for instance, ~/.ansible.cfg):
[ssh_connection]
control_path=~/.ssh/master-%%r@%%h:%%p.socket
Note the double % for variable substitution.
Usage
Connect to your server using a regular ssh command (ssh user@server), and perform 2FA;
Launch your ansible command as usual.
Step 2 must be performed within the ControlPersist window (one minute here), or keep an ssh connection open in one terminal while you launch the ansible command in another.
You can also force the connection to close when you no longer need it, using: ssh -O exit user@server.
Note that, if you open a third terminal and run ssh user@server, you will not be asked for credentials: the connection established in step 1 will be re-used.
Drawbacks
In case of bad network conditions
Sometimes, when you lose the connection, the socket persists, and every subsequent connection hangs. You must manually close it using ssh -O exit user@server. This is the only known drawback of this method.
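The master connection can also be inspected or torn down explicitly:
$ ssh -O check user@server
$ ssh -O exit user@server
check reports whether a master is running for that host; exit closes it and removes the stale socket.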
References:
Ansible parameter ANSIBLE_SSH_CONTROL_PATH
About multiplexing ssh (a very old blog post which introduced me to ssh multiplexing: https://blog.scottlowe.org/2015/12/11/using-ssh-multiplexing/)
Solution using a Bastion Host
Even using an ssh bastion host, it took me quite a while to get this working. In case it helps anyone else, here's what I came up with. It uses the ControlMaster ssh config options, and since Ansible uses regular ssh, it can be configured to use the same ssh features and re-use the connection to the bastion host regardless of how many connections it opens to remote hosts. I've seen these Control options recommended in general (presumably for performance reasons if you have a lot of hosts), but not in the context of 2FA to a bastion host.
With this approach you don't need any special sshd config changes, so you'll want AuthenticationMethods publickey,keyboard-interactive as the only authentication method setting on the bastion server, and publickey only for all the other servers that you reach by proxying through the bastion. Since the bastion host is the only one that accepts external connections from the internet, it's the only one that requires 2FA; internal hosts rely on agent forwarding for public key authentication but don't use 2FA.
On the client, I created a new ssh config file for my ansible environment in the top-level directory that I run ansible from (so a sibling of ansible.cfg), called ssh.config. It contains:
Host bastion-persistent-connection
HostName <bastion host>
ForwardAgent yes
IdentityFile ~/.ssh/my-key
ControlMaster auto
ControlPath ~/.ssh/ansible-%r@%h:%p
ControlPersist 10m
Host 10.0.*.*
ProxyCommand ssh -W %h:%p bastion-persistent-connection -F ./ssh.config
IdentityFile ~/.ssh/my-key
Then in ansible.cfg I have:
[ssh_connection]
ssh_args = -F ./ssh.config
A few things to note:
My private subnet in this case is 10.0.0.0/16 which maps to the host wildcard option above. The bastion proxies all ssh connections to servers on this subnet.
This is a bit brittle in that I can only run my ssh or ansible commands in this directory, because of the ProxyCommand passing the local path to this config file. Unfortunately I don't think there's an ssh variable that maps to the current config file being used so that I could pass the same config file to the ProxyCommand automatically. Depending on your environment it might be better to use an absolute path for this.
The one gotcha is it makes running ansible more complex. Unfortunately, from what I can tell ansible has no support whatsoever for 2FA. So if you have no existing ssh connection to the bastion, ansible will print out Verification code: once for every private server it's connecting to, but it's not actually listening for the input so no matter what you do the connections will fail.
So I first run: ssh -F ssh.config bastion-persistent-connection
This creates the socket file in ~/.ssh/ansible-*, and the local ssh master process will close & remove that socket after the configurable time (which I have set to 10m).
Once the socket is open I can run ansible commands like normal, e.g. ansible all -m ping and they succeed.