connect bitbucket pipeline to cpanel with API keys

How do I use SSH keys (created from cPanel) to connect to the server, and eventually pull a fresh copy, run composer updates, and run database migrations (a Symfony script)?
I get permission denied errors, so my ssh example.net.au ls -l /staging.example.net.au command is reaching the server; I'm just unsure how to use keys made in cPanel for authentication.
bitbucket-pipelines.yml
# This is an example Starter pipeline configuration
# Use a skeleton to build, test and deploy using manual and parallel steps
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:2
pipelines:
  default:
    - parallel:
        - step:
            name: 'Build and Test'
            script:
              - echo "Your build and test goes here..."
        - step:
            name: 'Lint'
            script:
              - echo "Your linting goes here..."
        - step:
            name: 'Security scan'
            script:
              - echo "Your security scan goes here..."
    # The following deployment steps will be executed for each pipeline run. To configure your steps and conditionally deploy see https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/
    - step:
        name: 'Deployment to Staging'
        deployment: staging
        script:
          - echo "Your deployment to staging script goes here..."
          - echo $TESTVAR
          - ssh example.net.au ls -l /staging.example.net.au
    - step:
        name: 'Deployment to Production'
        deployment: production
        trigger: 'manual'
        script:
          - echo "Your deployment to production script goes here..."

I think your SSH set-up may be incorrect. Please try the following to ensure both servers trust each other:
==Part 1==
Step 1. SSH into the cPanel server (use PuTTY or your preferred SSH client), and run the following commands to generate a new key:
ssh-keygen                # generate a new key pair (accept the defaults)
eval $(ssh-agent)         # start the SSH agent
ssh-add                   # load the new key into the agent
cat ~/.ssh/id_rsa.pub     # print the public key so it can be copied
Step 2. Copy the resulting key from the 'cat' command above, into: Bitbucket -> your repo -> Settings -> Access keys
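To confirm Part 1 worked before moving on, a quick check from the cPanel server is the usual ssh -T test against bitbucket.org (run it as the same cPanel user that generated the key):
# Run on the cPanel server; Bitbucket should reply describing the access the key grants
ssh -T git@bitbucket.org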
==Part 2==
Step 3. In Bitbucket, go to your repo -> settings -> SSH keys -> Generate key
Step 4. Back on your cPanel server's SSH connection, copy the key from Step 3 above into the authorized keys file. Save when you are done:
nano ~/.ssh/authorized_keys
Right-click to paste (usually)
CTRL+O to save
CTRL+X to exit
Step 5. In the same Bitbucket screen from Step 3, fetch and add the host's fingerprint. You will need to enter the URL or IP address of your cPanel server here. Some cPanel servers use non-default ports. If port 22 is not the correct port, be sure to specify it like so:
example.com:2200
(Port 443 is usually reserved for HTTPS and is unlikely to be the correct port for an SSH connection. If in doubt, try the default port 22 and the common alternative 2200 first.)
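Once both parts are in place, the staging step in the bitbucket-pipelines.yml above can run the real deployment over SSH. A minimal sketch; the cpaneluser name, the port, the directory, and the composer/Symfony commands are placeholders to replace with your own:
- step:
    name: 'Deployment to Staging'
    deployment: staging
    script:
      # user, port and paths below are placeholders for your cPanel account
      - ssh -p 22 cpaneluser@example.net.au "cd ~/staging.example.net.au && git pull && composer install --no-dev && php bin/console doctrine:migrations:migrate --no-interaction"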
Let me know if you have any questions and I am happy to assist you further.

Related

"Host key verification failed" error when running GitHub Actions on self-hosted runner (Windows 10)

I'm trying to run a simple GitHub Action on my self-hosted runner (Windows 10), but I'm getting the error Host key verification failed. [error]fatal: Could not read from remote repository. Here's the code for the GitHub Action:
name: GitHub Actions Demo
on:
  push:
    branches: ["feature"]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
I've verified that the self-hosted runner is properly configured and connected to the repository, and I can manually clone and fetch the repository on the same machine without any issues. I've also tried running the ssh-keyscan command and adding the resulting host key to the known_hosts file, but that doesn't solve the problem.
Instead of running the checkout action directly, try first running a test step:
steps:
  - name: Test SSH access
    run: ssh -Tv git@github.com
The idea is to see which key is presented, and whether the account used is the same as the one you use for your manual test (when the clone/fetch is working).
The OP ysief-001 then sees (in the comments)
After 1h30m I cancelled the workflow.
The last two lines are
Found key in C:\Users\ysief/.ssh/known_hosts:4
read_passphrase: can't open /dev/tty: No such file or directory
That simply means a passphrase-protected (i.e. encrypted) private key is not supported.
You need one without a passphrase. (Or you can remove the passphrase from your existing key.)
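If you would rather keep the existing key, removing its passphrase is usually enough. A sketch, assuming the runner uses the default key file under C:\Users\ysief\.ssh (point -f at whatever key the runner actually loads):
# Prompts for the old passphrase, then sets an empty one
ssh-keygen -p -f C:\Users\ysief\.ssh\id_rsa -N ""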

Unable to connect from bitbucket pipelines to shared hosting via ssh

What I need to do is to SSH into a public server (which is shared hosting) and run a script that starts the deployment process.
I followed what's written here:
I've created a key pair in Settings > Pipelines > SSH Keys
Then I've added the IP address of the remote server
Then I've appended the public key to the remote server's ~/.ssh/authorized_keys file
When I try to run this pipeline:
image: img-name
pipelines:
  branches:
    staging:
      - step:
          deployment: Staging
          script:
            - ssh remote_username@remote_ip:port ls -l
I have the following error:
Could not resolve hostname remote_ip:port: Name or service not known
Please help!
The SSH command doesn't take the ip:port syntax. You'll need to use a different format:
ssh -p port user@remote_ip "command"
(This assumes that your remote_ip is publicly-accessible, of course.)
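Applied to the step above, that might look like the following; remote_username, remote_ip and port are the same placeholders from the question:
- step:
    deployment: Staging
    script:
      # -p carries the port; quote the remote command so it runs on the server
      - ssh -p port remote_username@remote_ip "ls -l"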

Basic Delivery Using SSH In Bitbucket Pipelines

Here's what I've got so far:
I've generated an SSH key pair inside my repo and also added the public key to my ~/.ssh/authorized_keys on the remote host.
My remote host has root user and password login disabled for security. I put the SSH username I use to log in manually inside an environment variable called SSH_USERNAME.
Here's where I'm just not sure what to do. How should I fill out my bitbucket-pipelines.yml?
Here are the raw contents of that file... What should I add?
# This is a sample build configuration for JavaScript.
# Check our guides at https://confluence.atlassian.com/x/14UWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: samueldebruyn/debian-git
pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            - sftp $FTP_USERNAME@192.241.216.482
First of all: you should not add a key pair to your repo. Credentials should never be in a repo.
Defining the username as an environment variable is a good idea. You should do the same with the private key of your keypair. (But you have to Base64-encode it – see the Bitbucket Pipelines documentation – and mark it as secured, so it is not visible in the repo settings.)
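For example, to produce the value to paste into the secured variable (this assumes GNU base64; on macOS, omit -w 0 and strip newlines with tr -d '\n' instead):
# Print the private key as a single Base64 line
base64 -w 0 < ~/.ssh/id_rsa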
Then, before you actually want to connect, you have to make sure the private key (of course, Base64-decoded) is known to your pipeline’s SSH setup.
Basically, what you need to do in your script (either directly or in a shell script) is:
- echo "$SSH_PRIVATE_KEY" | base64 --decode > ~/.ssh/id_rsa
- chmod go-r ~/.ssh/id_rsa
BTW, I'd suggest also putting the host's IP in an env variable.
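Put together, the step could look roughly like this; the known_hosts handling and the $SSH_HOST variable are additions beyond the answer above, and the final command is only a stand-in for your real deployment:
- step:
    script:
      - mkdir -p ~/.ssh
      # Recreate the private key from the secured, Base64-encoded variable
      - echo "$SSH_PRIVATE_KEY" | base64 --decode > ~/.ssh/id_rsa
      - chmod go-r ~/.ssh/id_rsa
      # Trust the host so ssh does not stop at an interactive fingerprint prompt
      - ssh-keyscan -H "$SSH_HOST" >> ~/.ssh/known_hosts
      - ssh "$SSH_USERNAME@$SSH_HOST" "ls -l"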

Is it possible to add an ssh key to the agent for a private repo in an ansible playbook?

I am using Ansible to provision a Vagrant environment. As part of the provisioning process, I need to connect from the currently-provisioning VM to a private external repository using an ssh key in order to use composer to pull in modules for an application. I've done a lot of reading on this before asking this question, but still can't seem to comprehend what's going on.
What I want to happen is:
As part of the playbook, on the Vagrant VM, I add the ssh key for the private repo to the ssh-agent
Using that private key, I am then able to use composer to require modules from the external source
I've read articles which highlight specifying the key at playbook execution (e.g. ansible-playbook -u username --private-key play.yml). As far as I understand, this isn't for me, as I'm calling the playbook via the Vagrantfile. I've also read articles which mention SSH forwarding (SSH Agent Forwarding with Ansible). Based on what I have read, this is what I've done:
On the VM being provisioned, I insert a known_hosts file which consists of the host entries of the machines which house the repos I need.
On the VM being provisioned, I have the following in ~/.ssh/config:
Host <VM IP>
  ForwardAgent yes
I have the following entries in my ansible.cfg to support ssh forwarding:
[defaults]
transport = ssh
[ssh_connection]
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
[privilege_escalation]
pipelining = False
I have also added the following task to the playbook which tries to use composer:
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
I exit the ansible provisioner and add the private key on the provisioned VM to the agent via a shell provisioner (This is where I suspect I'm going wrong)
Then, I attempt to use composer, or call git via the command module. Like this, for example, to test:
- name: Test connection
  command: ssh -T git@github.com
Finally, just in case I wasn't understanding ssh connection forwarding correctly, I assumed that what was supposed to happen was that I needed to first add the key to my local machine's agent, then forward that through to the provisioned VM to use to grab the repositories via composer. So I used ssh-add on my local machine before executing vagrant up and running the provisioner.
No matter what, though, I always get permission denied when I do this. I'd greatly appreciate some understanding as to what I may be missing in my understanding of how ssh forwarding should be working here, as well as any guidance for making this connection happen.
I'm not certain I understand your question correctly, but I often setup machines that connect to a private bitbucket repository in order to clone it. You don't need to (and shouldn't) use agent forwarding for that ("ssh forwarding" is unclear; there's "authentication agent forwarding" and "port forwarding", but you need neither in this case).
Just to be clear with terminology, you are running Ansible in your local machine, you are provisioning the controlled machine, and you want to ssh from the controlled machine to a third-party server.
What I do is I upload the ssh key to the controlled machine, in /root/.ssh (more generally $HOME/.ssh where $HOME is the home directory of the controlled machine user who will connect to the third-party server—in my case that's root). I don't use the names id_rsa and id_rsa.pub, because I don't want to touch the default keys of that user (these might have a different purpose; for example, I use them to backup the controlled machine). So this is the code:
- name: Install bitbucket aptiko_ro ssh key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa
    mode: 0600
    content: "{{ aptiko_ro_ssh_key }}"

- name: Install bitbucket aptiko_ro ssh public key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa.pub
    content: "{{ aptiko_ro_ssh_pub_key }}"
Next, you need to tell the controlled machine ssh this: "When you connect to the third-party server, use key X instead of the default key, and logon as user Y". You tell it in this way:
- name: Install ssh config that uses aptiko_ro keys on bitbucket
  copy:
    dest: /root/.ssh/config
    content: |
      Host bitbucket.org
        IdentityFile ~/.ssh/aptiko_ro_id_rsa
        User aptiko_ro
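One thing the snippets above leave out is the server's host key; without it, the first ssh to bitbucket.org stops at an interactive prompt. A hedged addition using Ansible's known_hosts module (it trusts whatever ssh-keyscan returns from the control machine, so verify the fingerprint out of band):
- name: Ensure bitbucket.org's host key is in known_hosts
  known_hosts:
    path: /root/.ssh/known_hosts
    name: bitbucket.org
    key: "{{ lookup('pipe', 'ssh-keyscan -t rsa bitbucket.org') }}"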

SSH public key log in suddenly stopped working (CENTOS 6)

I was testing a jenkins build job in which I was using ansible to scp a tarball to a number of servers. Below is the ansible yaml file:
- hosts: websocket_host
  user: root
  vars:
    tarball: /data/websocket/jenkins/deployment/websocket_host/websocket.tgz
    deploydir: /root
  tasks:
    - name: copy build to websocket server
      action: copy src=$tarball dest=$deploydir/websocket.tgz
    - name: untar build on websocket server
      action: command tar xvfz $deploydir/websocket.tgz -C $deploydir
    - name: restart websocket server
      action: command /root/websocket/bin/websocket restart
The first two tasks ran successfully, with the command /root/websocket/bin/websocket restart failing. I have since been unable to log in (without a password) to any of the servers defined in my ansible host file for websocket_host. I have verified that all my permissions settings are correct on both the host and client machines. I have tested this from several client machines and they all now require me to enter a password to ssh. Yesterday I was able to ssh (via my public key) with no problem. I am using the root user on the host machines and wonder if copying files to the /root directory caused this issue, as it was the last command I was able to successfully run via a passwordless ssh session.
Turns out the Jenkins job changed ownership and group of my /root directory. The command: chown root.root /root fixes everything.
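sshd is strict about ownership and permissions on the home directory, ~/.ssh and authorized_keys, so a quick way to check and restore the usual values (standard settings; adjust if your policy differs):
# Inspect the ownership and permissions sshd checks before accepting a key
ls -ld /root /root/.ssh /root/.ssh/authorized_keys
# Restore the conventional values
chown root:root /root
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys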