GitLab Mirroring repositories: push to remote repository over SSH

I want to use SSH to automatically push my private Gitlab project to GitHub.com.
I configured an SSH key with GitHub.com, and git clone git@github.com:my-project.git runs successfully.
sudo ssh -vT git@github.com is also OK:
debug1: Authentication succeeded (publickey).
Authenticated to github.com ([20.205.243.166]:22).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: pledge: network
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
Hi muxianliangqin! You've successfully authenticated, but GitHub does not provide shell access.
debug1: channel 0: free: client-session, nchannels 1
Transferred: sent 3572, received 2912 bytes, in 0.6 seconds
Bytes per second: sent 6123.4, received 4992.0
debug1: Exit status 1
But the GitLab -> Mirroring repositories push failed.
Here are some of my settings:
Git repository URL=ssh://git@github.com/username/project.git
Mirror direction=push
detect host keys
Authentication method=SSH public key
error:
13:get remote references: create git ls-remote: exit status 128, stderr: "git@github.com: Permission denied (publickey).\r\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n".
What's my problem?
These are my settings on GitHub.
These are my settings on GitLab.

The GitLab Mirroring documentation includes:
SSH authentication is mutual:
You must prove to the server that you’re allowed to access the repository.
The server must also prove to you that it’s who it claims to be.
If you’re mirroring over SSH (using an ssh:// URL), you can authenticate using:
Password-based authentication, just as over HTTPS.
Public key authentication. This method is often more secure than password authentication, especially when the other repository supports deploy keys.
So double-check those settings.
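One point worth checking (my assumption, not stated in the answer above): with the "SSH public key" authentication method, GitLab generates its own key pair for the mirror, so the key that works locally with ssh -T git@github.com is not the one GitLab pushes with. The public key GitLab displays in the mirror settings has to be added to the GitHub repository as a deploy key with write access, for example via the GitHub API (the token, user, and repository names below are placeholders):
# Add the public key shown by GitLab as a writable deploy key on the GitHub repository
curl -X POST https://api.github.com/repos/username/project/keys \
     -H "Authorization: token <personal_access_token>" \
     -d '{"title":"gitlab-push-mirror","key":"<public key shown by GitLab>","read_only":false}'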

Related

SSH port forwarding occasionally fails

I'm using SSH port forwarding to get to a DB behind a firewall. I use the following command (it forwards the remote port 5432 to local port 5430):
ssh -i privatekey -v -N -A \
ec2-user@host -fNT -4 -L \
5430:rds-endpoint.us-west-2.rds.amazonaws.com:5432
This command always returns exit code 0, but in approximately one out of ten cases it doesn't actually open the tunnel, and I get a connection refused error when I try to connect to localhost:5430.
I've checked the debug output and noticed that there's one difference. The unsuccessful runs' debug output ends with this:
debug1: channel 0: new [port listener]
debug1: Requesting no-more-sessions@openssh.com
debug1: forking to background
while the successful runs have 3 more lines after the forking to background line:
debug1: Entering interactive session.
debug1: pledge: network
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
So I assume SSH fails to "enter interactive session". Is there a way to fight this bug and make the port forwarding command reliable?
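One option that may help here (my suggestion, not from the thread): set ExitOnForwardFailure so that ssh exits with a non-zero status when it cannot establish the forwarding, which a wrapper script can then detect and retry, instead of the command silently backgrounding without a usable tunnel. A sketch of the same command with that option and without the duplicated flags:
ssh -i privatekey -o ExitOnForwardFailure=yes -f -N -4 \
    -L 5430:rds-endpoint.us-west-2.rds.amazonaws.com:5432 \
    ec2-user@host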

SCP timeout when running through a script when connected to a remote machine through ssh

I'm trying to run the following scenario using a TCL script.
Scenario -
Host A runs the TCL script. Host A script connects to Host B through ssh. Then the script invokes an scp file transfer from Host C (server) to Host B (client).
Problem -
The script doesn't actually implement a timeout scenario. However, scp fails with no error message exactly after 10 seconds (probably a timeout). If done manually, i.e. logging in to Host B from Host A and then running scp from Host C to Host B, there is no timeout observed and the file transfer is successful.
The SSH connection from the TCL script is implemented using the "expect" package.
What could be the reason? Kindly suggest some solutions.
Thank You.
Did you set
RSAAuthentication yes
on Host C, and add the public key of Host B's user to the authorized_keys file of Host C's user?
See https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2 for more details.
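A minimal setup sketch, assuming OpenSSH on both hosts (user and host names are placeholders):
# On Host B (the ssh/scp client), generate a key pair if one does not already exist
ssh-keygen -t rsa -b 4096
# Install the public key into the authorized_keys file of the target user on Host C
ssh-copy-id user@host-c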
Simple test:
Try to run the scp manually (or try ssh): It shouldn't ask you for a password. Running ssh -v from Host B to Host C should include the following lines:
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/xyz/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 1047
debug1: Authentication succeeded (publickey).

SSH Connection closed by remote host : Having Security Group SSH Inbound permission set to specific IP address

I am trying to connect to an AWS EC2 server from my local system using SSH. It connects to the instance when the security group inbound permission for SSH is set to allow connections from anywhere. But whenever it is restricted to a specific IP address, I get Connection closed by remote host. I'm getting the following output while connecting:
sudo ssh -vvv -i {$pemfile} ubuntu@{domain_name}
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to {HOST_NAME HERE} port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug3: Incorrect RSA1 identifier
debug3: Could not load "{KEY_PATH HERE}" as a RSA1 public key
debug1: identity file {KEY_PATH HERE} type -1
debug1: identity file {KEY_PATH HERE} type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
ssh_exchange_identification: Connection closed by remote host
Are you specifying the correct IP? You might be using the IP you get from something like http://formyip.com/, and what the EC2 instance sees might be a different one.
Try following:
Change setting to: Allow all IP
SSH into your instance
See the logs at /var/log/auth.log (not very sure about this logfile location)
Identify your IP from the last successful login attempt (which might or might not be the same as the one you got from the above website)
Use the IP from the logs (if different) in the security group settings, as shown in the sketch after this list
TRY AGAIN :D
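The same idea as a command-line sketch (my addition; the security group ID is a placeholder), asking AWS itself which public IP you come from and opening port 22 for just that address:
# The public IP that EC2 actually sees you connecting from
MYIP=$(curl -s https://checkip.amazonaws.com)
# Allow SSH from that address only
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr "${MYIP}/32"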
I didn't look into the details, but simply restarting the instance solved my problem.
Open your instance in the AWS web interface and go to the "Connect" tab; it shows the SSH connection information. Make sure you are following it correctly.
My problem was using ec2-user as the username instead of ubuntu@ec2-my-ip.aws.com.

Jenkins - can the "Execute Shell" execute SSH commands

Is it possible for the Jenkins "Execute shell" to execute SSH commands?
Jenkins has a number of pre- and post-build options which cater specifically for SSH-type commands; however, I have a single script which does the build and then the SCP and SSH commands. Is Jenkins forcing users to break up build scripts into multiple steps?
The "Execute shell" step is the one I'm trying to execute my SSH commands from; however, I've had no success.
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /var/lib/jenkins/.ssh/identity
debug1: Trying private key: /var/lib/jenkins/.ssh/id_rsa
debug1: Trying private key: /var/lib/jenkins/.ssh/id_dsa
debug1: Next authentication method: password
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug1: Authentications that can continue: publickey,password
Permission denied, please try again.
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug1: Authentications that can continue: publickey,password
Permission denied, please try again.
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug1: Authentications that can continue: publickey,password
debug1: No more authentication methods to try.
Permission denied (publickey,password).
SSH Access not available for build engine
As long as you use a publickey, you'll be able to send commands via ssh and copy files via scp. We use this to spawn some specific processes and publish certain artifacts that can't be pushed via existing commands for various reasons.
It's necessary to be careful about which keys you are using and what users you are addressing on the remote server. Often, we use explicit -i arguments to ssh, and we always use explicit user names to make sure that everything goes as expected:
ssh -i <key_path> <user>@<fqdn_host> <command>
If you do this in your script, you should be fine. Of course, the key file will have to be readable by your Jenkins process and you will need to make sure that the key is installed on both sides.
I would also strongly suggest using ssh's built-in policy controls to control:
Which hosts can use this key
What commands can be used by this key
In particular, you can use settings in ~/.ssh/authorized_keys on the host that is the target of the ssh/scp command to limit the hosts that can attach (from=) and even pre-load the command so that a particular key always executes just one particular command (command=).
For the truly adventurous, you can specify a command= and send the commands to a restricted shell command which limits either the directory access or command access.
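A sketch of such an entry in ~/.ssh/authorized_keys on the target host (the host pattern, script path, and key material are placeholders):
# Only the Jenkins host may use this key, and it can only run the deploy script
from="jenkins.example.com",command="/usr/local/bin/deploy.sh",no-pty,no-port-forwarding ssh-rsa AAAAB3Nza... jenkins@build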
Instead of explicitly executing the ssh command from an "Execute shell" step, you could use one of the existing Jenkins add-ons:
Publish Over SSH Plugin - execute SSH commands or transfer files over SCP/SFTP.
SSH plugin - execute SSH commands.

SSH: Connection closed by remote server

I am trying to SSH login to my remote server. But whenever I try to log in through the terminal using the ssh command:
ssh root@{ip_address}
I get the error:
Connection closed by {ip_address}
I checked hosts.deny and hosts.allow; there is nothing in the files. I don't understand why this is happening.
It happened when I changed my workstation and the key changed. When I tried the SSH login, it asked to add the key, I entered yes, and then it closed the connection.
Is there any way to get connected with ssh again?
Your help is appreciated.
Thank you.
Edit:
Output of ssh -v -v -v -v root@{ip_address} is:
OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to {ip_address} [{ip_address}] port 22.
debug1: Connection established.
debug3: Incorrect RSA1 identifier
debug3: Could not load "/home/mona/.ssh/id_rsa" as a RSA1 public key
debug1: identity file /home/mona/.ssh/id_rsa type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
debug1: identity file /home/mona/.ssh/id_rsa-cert type -1
debug1: identity file /home/mona/.ssh/id_dsa type -1
debug1: identity file /home/mona/.ssh/id_dsa-cert type -1
debug1: identity file /home/mona/.ssh/id_ecdsa type -1
debug1: identity file /home/mona/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1
debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.1
debug2: fd 3 setting O_NONBLOCK
debug3: load_hostkeys: loading entries for host "{ip_address}" from file "/home/mona/.ssh/known_hosts"
debug3: load_hostkeys: loaded 0 keys
debug1: SSH2_MSG_KEXINIT sent
Connection closed by 151.236.220.15
Had the same issue but a simple remote server reboot helped.
Are you sure your server is permitting root logins via SSH?
If not, I suggest using a different account with sudo privileges instead of enabling root login, especially if the server's SSH port is accessible from the whole internet.
Try sudo ssh root@{ip_address}; it works for me.
I tried to connect with a user that had :/bin/false in /etc/passwd. After changing it to :/bin/bash, the connection was no longer closed.
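For reference, the same change can be made without editing /etc/passwd by hand (a sketch assuming root access on the server; the user name is a placeholder):
# Give the account a real login shell
usermod -s /bin/bash someuser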
I had a similar issue that was resolved by lowering the MTU on the client side with the following command:
ip link set mtu 1400 dev eth0
I found this solution in a separate thread on Server Fault.
I was getting the same "Connection closed by {ip_address}" error on one of my SSH connections. I tried all the usual solutions and nothing worked. Finally I found that the ~/.ssh/authorized_keys file on the host was corrupted. Someone had tried to append a key to the file, but they copied and pasted it with embedded line feeds where each line wrapped at the end. So what should have been one continuous string spanning three lines was actually three separate strings -- one per line. Since the embedded line feed was exactly at the end of the line, it was not apparent from looking at it.
I deleted the offending key and added my own. Then everything worked as expected.
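A quick way to spot this kind of corruption (my suggestion; recent OpenSSH versions of ssh-keygen report a fingerprint for each valid key line, so a key broken across several lines stands out):
# Every key must sit on a single line; wrapped fragments show up as invalid entries
ssh-keygen -lf ~/.ssh/authorized_keys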
I temporarily disabled my antivirus firewall, and this may have helped a bit.
Now it suddenly says "Shell access is not enabled on your account! Connection closed."
So I logged into my WHM at server.domain_name:2087, clicked on Modify domain, and enabled Shell Access for the website.
(Or ask your hosting provider to enable SSH for you if you do not have a WHM server.)
Login succeeded; it now says:
Last login: 03:37 from . [user@whm_domain_name ~]$
I had the same problem while working with the Cloud9 editor. Mine was caused by high CPU usage. It was fine again after stopping the Apache connection.
Check the name being used to connect to the FTP site; it's either wrong, or multiple names are being sent for authentication.