Duplicity is throwing BackendException: ssh connection to my server:22 failed: not a valid OPENSSH private key file - ssh

Thanks to maybeg, I've managed to back up my data from home to an external server (an Amazon one).
As I don't want to back up company data to Amazon, I tried with an internal backup server.
I then used this command (I have my own key):
docker run -d --name volumerize \
  -v /MyFolder/Keys/:/MyFolder/Keys/ \
  -v jenkins_volume:/source:ro \
  -v backup_volume:/backup \
  -e 'VOLUMERIZE_SOURCE=/source' \
  -e "VOLUMERIZE_TARGET=scp://myuser@mybackupserver/home/myuser/" \
  -e 'VOLUMERIZE_DUPLICITY_OPTIONS=--ssh-options "-i /MyFolder/Keys/myuserkey"' \
  -e 'PASSPHRASE="mypassphrase"' \
  blacklabelops/volumerize
When using the duplicity backup command, inside or outside the container, I get the following error:
/usr/lib/python2.7/site-packages/paramiko/ecdsakey.py:200: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
signature, ec.ECDSA(self.ecdsa_curve.hash_object())
BackendException: ssh connection to myuser@mybackupserver:22 failed: not a valid OPENSSH private key file
Strangely, inside or outside the volumerize container, the following runs properly:
ssh -i /MyFolder/Keys/myuserkey myuser@mybackupserver
key_load_public: invalid format
Enter passphrase for key '/MyFolder/Keys/myuser':
[myuser@mybackupserver ~]$
Editing the backup file, for example, gives me the following:
#!/bin/bash
set -o errexit
source /etc/volumerize/stopContainers
duplicity $@ --allow-source-mismatch --archive-dir=/volumerize-cache --ssh-options "-i /MyFolder/Keys/myuserkey" /source scp://myuser@mybackupserver/home/myuser/
source /etc/volumerize/startContainers
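As a diagnostic, the same command the script runs can be executed directly with duplicity's maximum verbosity (-v9); this is just a troubleshooting sketch, not a fix:
duplicity -v9 --allow-source-mismatch --archive-dir=/volumerize-cache --ssh-options "-i /MyFolder/Keys/myuserkey" /source scp://myuser@mybackupserver/home/myuser/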
I've checked the env variables inside the container; please find below what I have (note that the passphrase has been added as an env variable, as found here):
HOSTNAME=b68f0e1a2d45
TERM=xterm
BLACKLABELOPS_HOME=/var/blacklabelops
GOOGLE_DRIVE_CREDENTIAL_FILE=/credentials/googledrive.cred
VOLUMERIZE_HOME=/etc/volumerize
VOLUMERIZE_SOURCE=/source
DOCKERIZE_VERSION=v0.5.0
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/etc/volumerize
VOLUMERIZE_TARGET=scp://myuser@mybackupserver/home/myuser/
PWD=/etc/volumerize
VOLUMERIZE_DUPLICITY_OPTIONS=--ssh-options "-i /MyFolder/Keys/myuserkey"
VOLUMERIZE_CACHE=/volumerize-cache
GPG_TTY=/dev/console
SHLVL=1
HOME=/root
no_proxy=*.local, 169.254/16
GOOGLE_DRIVE_SETTINGS=/credentials/cred.file
PASSPHRASE="mypassphrase"
_=/usr/bin/env
Can someone point me in the right direction?
Regards,
pierre
Edit 1:
I compared both private key files (Amazon and company) using
openssl rsa -in yourkey.pem -check
and both say:
RSA key ok
writing RSA key
-----BEGIN RSA PRIVATE KEY-----
....
-----END RSA PRIVATE KEY-----
Edit 2:
1. Had a look, without any success, at duplicity-backendexception.
2. For information, the Paramiko version is 2.2.1.
3. Connection is successful using the following Python script:
import paramiko
import StringIO

# Load the private key through the same paramiko code path duplicity uses
f = open('/MyFolder/Keys/myuserkey', 'r')
s = f.read()
keyfile = StringIO.StringIO(s)
mykey = paramiko.RSAKey.from_private_key(keyfile, password='mypassphrase')

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('mybackupserver', username='myuser', pkey=mykey)
stdin, stdout, stderr = ssh.exec_command('uptime')
stdout.readlines()
[u' 12:35:27 up 3 days, 1:42, 0 users, load average: 1.59, 3.10, 3.00\n']

Try the pexpect+scp:// backend (more on the available ssh backends can be found in the duplicity manpage: http://duplicity.nongnu.org/duplicity.1.html ).
It uses the command-line ssh binaries, so maybe the error is different or more detailed there?
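In the docker run command above, that would only mean changing the target URL scheme, along these lines (a sketch; host and path as in your setup):
-e "VOLUMERIZE_TARGET=pexpect+scp://myuser@mybackupserver/home/myuser/"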
The error on
ssh -i /MyFolder/Keys/myuserkey myuser@mybackupserver
key_load_public: invalid format
does not seem normal. Try to provide the public key in the proper format, or not at all.
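If a malformed myuserkey.pub sitting next to the private key is the cause, one way to regenerate it is from the private key itself (assuming the private key is sound; ssh-keygen will prompt for the passphrase):
ssh-keygen -y -f /MyFolder/Keys/myuserkey > /MyFolder/Keys/myuserkey.pub
Alternatively, just move the .pub file out of the way so that only the private key is used.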
..ede/duply.net

Related

Load key "/root/.ssh/pipelines_id": invalid format

I am using a Bitbucket pipeline to deploy an app on an EC2 server.
Here is my bitbucket-pipelines.yaml file:
image: atlassian/default-image:3
pipelines:
  branches:
    dev:
      - step:
          name: automated deployment
          script:
            - pipe: atlassian/scp-deploy:1.2.1
              variables:
                USER: 'ubuntu'
                SERVER: $SERVER_IP
                REMOTE_PATH: '/home/ubuntu/utags-test/server'
                LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'
            - pipe: atlassian/ssh-run:0.4.1
              variables:
                SSH_USER: 'ubuntu'
                SERVER: $SERVER_IP
                COMMAND: 'cd /home/ubuntu/utags-test/server;docker pull paranjay1/utags-paranjay:dev;docker-compose down;docker-compose up -d'
                SSH_KEY: $SERVER_PRIVATE_KEY
                DEBUG: 'true'
          services:
            - docker
Error while running the pipeline:
Build setup 13s
pipe: atlassian/scp-deploy:1.2.1
....
....
Digest: sha256:b9111f61b5824ca7ed1cb63689a6da55ca6d6e8985eb778c36a5dfc2ffe776a8
Status: Downloaded newer image for bitbucketpipelines/scp-deploy:1.2.1
INFO: Configuring ssh with default ssh key.
INFO: Adding known hosts...
INFO: Appending to ssh config file private key path
INFO: Applied file permissions to ssh directory.
✔ Deployment finished.
pipe: atlassian/ssh-run:0.4.1
....
....
Digest: sha256:b8ff5416420ef659869bf1ea6e95502b8fa28ccd5e51321e4832d9d81fdefc18
Status: Downloaded newer image for bitbucketpipelines/ssh-run:0.4.1
INFO: Executing the pipe...
INFO: Using passed SSH_KEY
INFO: Executing command on 13.235.33.118
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22 ubuntu@13.235.33.118 bash -c 'cd /utags-test/server;docker pull paranjay1/utags-paranjay:dev;docker-compose down;docker-compose up -d'
Load key "/root/.ssh/pipelines_id": invalid format
Load key "/root/.ssh/pipelines_id": invalid format
ubuntu@13.235.33.118: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
✖ Execution failed.
I already installed docker and docker-compose on my EC2 server.
I generated the keys on Bitbucket in the SSH keys section and added the Bitbucket public key to my authorized_keys file on the EC2 server.
$SERVER_PRIVATE_KEY contains the EC2 server's private key.
$SERVER_IP contains my EC2 server's public IP.
How can I solve this issue, and what might be the cause of this error?
The atlassian/ssh-run pipe documentation states that the alternative SSH_KEY should be base64-encoded. My bet is you missed that bit:
An base64 encoded alternate SSH_KEY to use instead of the key configured in the Bitbucket Pipelines admin screens (which is used by default). This should be encoded as per the instructions given in the docs for using multiple ssh keys.
Another good question would be: why aren't you using the SSH key provided by the pipeline instead?
You can use the repository SSH key, so you won't need to encode it:
bitbucket.com/.../admin/addon/admin/pipelines/ssh-keys
Then remove the SSH_KEY variable and the pipe uses the repository SSH key by default.
You actually don't need to use SSH_KEY: $SERVER_PRIVATE_KEY in your pipe. You can use the default key available in your bitbucket_repo > repository_settings > ssh_key; you can generate a key there. The generated public key should be in the remote server's /home/ubuntu/.ssh/authorized_keys file. Add your remote server's public IP to the known hosts and fetch the fingerprint.
But if you want to use a different SSH key, then you have to add SSH_KEY: $SERVER_PRIVATE_KEY to your pipe, where
$SERVER_PRIVATE_KEY is your private key, encoded to base64.
You have to use the base64 -w 0 < my_ssh_key command to encode your key.
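For example (a sketch; my_ssh_key stands for your private key file):
base64 -w 0 < my_ssh_key
# paste the single-line output into the $SERVER_PRIVATE_KEY secured repository variable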

git-secret: gpg: [don't know]: partial length invalid for packet type 20 in a GitLab CI job

I'm having trouble with git-secret in my GitLab CI jobs.
What I did:
init, add users, add files, and hide them using git-secret
create a job where I want to reveal the files:
git secret:
  stage: init
  before_script:
    - sh -c "echo 'deb https://gitsecret.jfrog.io/artifactory/git-secret-deb git-secret main' >> /etc/apt/sources.list"
    - wget -qO - 'https://gitsecret.jfrog.io/artifactory/api/gpg/key/public' | apt-key add -
    - apt-get update && apt-get install -y git-secret
  script:
    - echo $GPG_PRIVATE_KEY | tr ',' '\n' > ./pkey.gpg
    - export GPG_TTY=$(tty)
    - gpg --batch --import ./pkey.gpg
    - git secret reveal -p ${GPG_PASSPHRASE}
Result logs:
...
$ gpg --batch --import ./pkey.gpg
gpg: directory '/root/.gnupg' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key SOMEKEY: public key "Email Name <ci@email.com>" imported
gpg: key SOMEKEY: secret key imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: secret keys read: 1
gpg: secret keys imported: 1
$ git secret reveal -p ${GPG_PASSPHRASE}
gpg: [don't know]: partial length invalid for packet type 20
git-secret: abort: problem decrypting file with gpg: exit code 2: /path/to/decrypted/file
I don't understand where the problem is. What does "packet type 20" mean? And the length of what?
Locally it reveals fine. The command git secret whoknows shows that the email on the CI env can decrypt. The passphrase has been checked and is passed to the job.
For me, the problem was the GnuPG versions being different between the encryption machine (v2.3) and the decryption side (v2.2).
After I downgraded it to v2.2 (v2.3 not yet being available on Debian, the decrypting side could not be upgraded instead), the problem went away.
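A quick way to compare what is in play on each side (the first line of the output shows the version):
gpg --version | head -n 1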
This is a common problem with the format of the keys.
Since you're using GitLab CI, you should take advantage of the File type in the CI/CD variables instead of storing the value of the GPG key as a Variable type.
First of all, forget about generating the armor in one line with the piped | tr '\n' ',' and get the proper multiline armor.
Second, add it to your GitLab CI variables with type "File", add an empty line at the end and then delete it (this seems silly, but it will save you headaches, since there seems to be a problem when copying directly from the shell to the textbox in GitLab).
Third, import the file directly into your keychain:
gpg --batch --import $GPG_PRIVATE_KEY
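With a File-type variable, $GPG_PRIVATE_KEY expands to a path to a temporary file, so the job's script section reduces to something like this (a sketch of the script above with the tr workaround removed):
script:
  - export GPG_TTY=$(tty)
  - gpg --batch --import $GPG_PRIVATE_KEY
  - git secret reveal -p ${GPG_PASSPHRASE}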

Why does importing the GPG key for the mono repo fail?

When following the steps to set up Mono on the following site, it fails to import the GPG key for the repo.
https://www.mono-project.com/download/stable/#download-lin-centos
This is happening on CentOS machines running both 6.x and 7.x.
rpm --import "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF"
error: https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF: key 1 not an armored public key.
This appears to be due to a missing newline at the end of the key file. If you open the key with vi and save it without making any changes (this is one way to ensure there is a newline at the end of the file), the import works.
curl -v "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF" -o key
vi key
# don't modify, just save it with ":wq"
rpm --import key
Another way to add the newline to the end of the file: https://unix.stackexchange.com/a/31955
sed -i -e '$a\' key
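To verify that the trailing newline is in fact missing (command substitution strips a trailing newline, so non-empty output here means the file does not end with one):
test -n "$(tail -c 1 key)" && echo "no trailing newline"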
See https://github.com/mono/mono/issues/15955
I used this workaround to then download it. See https://github.com/mono/mono/issues/16025
rpm --import https://download.mono-project.com/repo/xamarin.gpg
su -c 'curl https://download.mono-project.com/repo/centos7-stable.repo | tee /etc/yum.repos.d/mono-centos7-stable.repo'

ssh-copy-id installed through Homebrew errors when trying to copy an SSH key

I installed ssh-copy-id through Homebrew.
When I type ssh-copy-id -i mykey.pub [path to remote], I get the following error:
/usr/local/bin/ssh-copy-id: ERROR: failed to open ID file './mykey': No such file or directory
It appears that it's not finding the key because the regex is cutting off the .pub. What am I doing wrong?
It turns out that ssh-copy-id checks whether there is a valid private key in the same directory as the public key it's uploading.
I was uploading someone else's SSH key so they could access a server. I don't have their private key on my machine, which is why the error occurred.
One option is to just manually remove that check from the script, but that means hacking the Homebrew-installed code.
My solution was to run touch mykey to create a blank file with the filename of the private key corresponding to the public key I was uploading (mykey.pub).
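In other words (a sketch; user@remote-host stands for the real destination):
touch mykey                          # blank placeholder so the private-key check passes
ssh-copy-id -i mykey.pub user@remote-host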
Thanks for your explanation. I resolved the issue by following these simple steps:
[ceph@monitor ~]$ ssh-keygen -t rsa
[ceph@monitor ~]$ ssh-copy-id ceph@osd-0

Keychain working, but still must enter passphrase on first decrypt

I am using keychain to store ssh and gpg keys. When I log in and start a terminal, I get prompted for both the ssh and gpg passphrases; then keychain reports that it has found the existing agents and keys:
keychain 2.7.1 ~ http://www.funtoo.org
No other ssh-agent(s) than keychain's 2740 found running
No other gpg-agent(s) than keychain's 3301 found running
Found existing ssh-agent: 2740
Found existing gpg-agent: 3301
Known ssh key: /home/ded/.ssh/id_rsa
Known gpg key: C0A9F2F0
But if I try to decrypt a gpg file, say
$ gpg -d ~/.authinfo.gpg
I am prompted again for the gpg passphrase, but only the first time; decrypting again, even from a new terminal, works fine. This means that Emacs Gnus, for example, fails to connect unless I first do a manual decrypt. Very annoying.
I would like to enter the passphrases once, when I log in.
Here is what I have in my zshrc (also bashrc) to start up keychain:
if [[ $- == *i* ]]; then
  eval `keychain --eval id_rsa C0A9F2F0 --inherit any-once --stop others --nogui`
  GPG_TTY=$(tty)
  export GPG_TTY
else
  # In a non-interactive script, eval keychain, but don't try to
  # prompt for passphrase
  eval `keychain --eval id_rsa C0A9F2F0 7BBA874D --inherit any-once --stop others --quiet --noask`
fi
Here is my ~/.gnupg/gpg-agent.conf
pinentry-program /usr/bin/pinentry-curses
# Time to live for gpg keys set
# 864000 is 10 days; max set to 100 days
max-cache-ttl 8640000
default-cache-ttl 864000
max-cache-ttl-ssh 8640000
# Use gpg-agent to serve SSH keys as well
# enable-ssh-support
# default-cache-ttl-ssh 864000
log-file /home/ded/gpg-agent.log
debug 4
Any ideas?