GitLab: launch shell pipelines as a specific user over ssh

gitlab_version: 14.1.1-ee
I copied the SSH key (with ssh-copy-id) to the target host as the user "user".
When I create a pipeline with the "shell" runner, I have problems communicating with other hosts over ssh.
I created a simple job with the "id" command, and it returns "root".
How can I launch GitLab pipelines as the "user" user?
Thanks.

I found how to connect to the Linux target host with a user other than the runner user.
Procedure:
Install sshpass on the GitLab server host.
Under Project_name/Settings/"CI/CD"/Variables, create a variable named pass_var with the password string as its value.
In the pipeline editor, in the script section of .gitlab-ci.yml:
script:
  - sshpass -p $pass_var ssh user@host_ip "hostname"
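For context, here is a hedged sketch of how that line might sit in a complete .gitlab-ci.yml job; the job name, stage, runner tag, and the StrictHostKeyChecking option are illustrative assumptions, not part of the original answer:

deploy_to_host:
  stage: deploy
  tags:
    - shell
  script:
    # pass_var is the CI/CD variable holding the password for "user" on host_ip
    - sshpass -p "$pass_var" ssh -o StrictHostKeyChecking=no user@host_ip "hostname"

Marking pass_var as masked in the CI/CD variable settings keeps the password out of the job log.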

Related

Ansible unable to create folder on localhost with different user

I'm executing the Ansible playbook as appuser, whereas I wish to create a folder as user webuser on localhost.
ssh keys are set up for webuser on my localhost, so after logging in as appuser I can simply run ssh webuser@localhost to switch to webuser.
Note: I do not have sudo privileges, so I cannot sudo to switch from appuser to webuser.
Below is my playbook, which is run as user appuser but needs to create a folder 04May2020 on localhost as webuser:
- name: "Play 1"
hosts: localhost
remote_user: "webuser"
vars:
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
ansible_ssh_private_key_file: /app/misc_automation/ssh_keys_id_rsa
tasks:
- name: create folder for today's print
file:
path: "/webWeb/htdocs/print/04May2020"
state: directory
remote_user: webuser
However, the output shows that the folder is created as appuser instead of webuser. See the output below, showing the ssh connection being established as appuser instead of webuser.
ansible-playbook /app/Ansible/playbook/print_oracle/print.yml -i /app/Ansible/playbook/print_oracle/allhosts.hosts -vvv
TASK [create folder for today] ***********************************
task path: /app/Ansible/playbook/print_oracle/print.yml:33
Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0'
Can you please suggest if it is possible without sudo?
Putting all my comments together in a comprehensive answer.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: appuser
This indicates that you are connecting to localhost through the local connection plugin, either because you explicitly re-declared the host as such or because you are using the implicit localhost. From the discussion, you are in the second situation.
When using the local connection plugin, as indicated in the documentation above, the remote_user is ignored. Trying to change the user has no effect, as you can see in the test run below (user ids changed):
# Check we are locally running as user1
$ id -a
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Running the same command through ansible returns the same result
$ ansible localhost -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
# Trying to change the remote user has no effect
$ ansible localhost -u whatever -a 'id -a'
localhost | CHANGED | rc=0 >>
uid=xxxx(user1) gid=yyy(group1) groups=yyy(group1)
Without changing your playbook and/or inventory, the only solution is to launch the playbook as the user who needs to create the directory.
Since you have ssh available, another solution is to declare a new host that you will use only for this purpose, which will target the local IP through ssh. (Note: you can explicitly declare localhost like this, but then all connections will go through ssh, which might not be what you want.)
Somewhere at the top of your inventory, add the line:
localssh ansible_host=127.0.0.1
And in your playbook, change the hosts line to:
hosts: localssh
Now the connection to your local machine will go through ssh and the remote_user will be obeyed correctly.
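Putting the two changes together, a minimal sketch of how the inventory and the play from the question would line up (file boundaries shown as comments; the key path and vars are the ones from the question):

# inventory (e.g. allhosts.hosts)
localssh ansible_host=127.0.0.1

# print.yml
- name: "Play 1"
  hosts: localssh
  remote_user: "webuser"
  vars:
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    ansible_ssh_private_key_file: /app/misc_automation/ssh_keys_id_rsa
  tasks:
    - name: create folder for today's print
      file:
        path: "/webWeb/htdocs/print/04May2020"
        state: directory

With the play targeting localssh, the connection goes over ssh and remote_user: webuser is honored, so the directory is created as webuser.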
One way you can try is by setting the ansible_connection variable for localhost. To do this, in the directory from which you are running ansible commands, create a host_vars directory. In that sub-directory, create a file named localhost containing the line ansible_connection: smart.
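As a sketch, assuming you run ansible-playbook from the directory that contains the playbook, the new file would sit alongside it like this (the directory name host_vars is the Ansible convention; everything else is illustrative):

# host_vars/localhost (relative to where you run ansible-playbook)
ansible_connection: smart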

Unable to connect from bitbucket pipelines to shared hosting via ssh

What I need to do is to SSH into a public server (which is shared hosting) and run a script that starts the deployment process.
I followed what's written here:
I've created a key pair in Settings > Pipelines > SSH Keys
Then I've added the IP address of the remote server
Then I've appended the public key to the remote server's ~/.ssh/authorized_keys file
When I try to run this pipeline:
image: img-name
pipelines:
  branches:
    staging:
      - step:
          deployment: Staging
          script:
            - ssh remote_username@remote_ip:port ls -l
I have the following error:
Could not resolve hostname remote_ip:port: Name or service not known
Please help!
The SSH command doesn't take the ip:port syntax. You'll need to use a different format:
ssh -p port user@remote_ip "command"
(This assumes that your remote_ip is publicly-accessible, of course.)
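Applied to the pipeline above, the step would look roughly like this; the port value 22022 is only a placeholder for whatever SSH port your shared host actually exposes:

image: img-name
pipelines:
  branches:
    staging:
      - step:
          deployment: Staging
          script:
            # user, host, and port are passed separately instead of user@host:port
            - ssh -p 22022 remote_username@remote_ip "ls -l"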

Is it possible to add an ssh key to the agent for a private repo in an ansible playbook?

I am using Ansible to provision a Vagrant environment. As part of the provisioning process, I need to connect from the currently-provisioning VM to a private external repository using an ssh key in order to use composer to pull in modules for an application. I've done a lot of reading on this before asking this question, but still can't seem to comprehend what's going on.
What I want to happen is:
As part of the playbook, on the Vagrant VM, I add the ssh key for the private repo to the ssh-agent
Using that private key, I am then able to use composer to require modules from the external source
I've read articles which highlight specifying the key in playbook execution. (E.g. ansible-playbook -u username --private-key play.yml) As far as I understand, this isn't for me, as I'm calling the playbook via the Vagrantfile. I've also read articles which mention ssh forwarding. (SSH Agent Forwarding with Ansible). Based on what I have read, this is what I've done:
On the VM being provisioned, I insert a known_hosts file which consists of the host entries of the machines that house the repos I need.
On the VM being provisioned, I have the following in ~/.ssh/config:
Host <VM IP>
  ForwardAgent yes
I have the following entries in my ansible.cfg to support ssh forwarding:
[defaults]
transport = ssh
[ssh_connection]
ssh_args=-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r
[privilege_escalation]
pipelining = False
I have also added the following task to the playbook which tries to use composer:
- name: Add ssh agent line to sudoers
  become: true
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
I exit the ansible provisioner and add the private key on the provisioned VM to the agent via a shell provisioner (This is where I suspect I'm going wrong)
Then, I attempt to use composer, or call git via the command module. Like this, for example, to test:
- name: Test connection
  command: ssh -T git@github.com
Finally, just in case I wasn't understanding ssh connection forwarding correctly, I assumed that what was supposed to happen was that I needed to first add the key to my local machine's agent, then forward that through to the provisioned VM to use to grab the repositories via composer. So I used ssh-add on my local machine before executing vagrant up and running the provisioner.
No matter what, though, I always get permission denied when I do this. I'd greatly appreciate some understanding as to what I may be missing in my understanding of how ssh forwarding should be working here, as well as any guidance for making this connection happen.
I'm not certain I understand your question correctly, but I often set up machines that connect to a private Bitbucket repository in order to clone it. You don't need to (and shouldn't) use agent forwarding for that ("ssh forwarding" is unclear; there's "authentication agent forwarding" and "port forwarding", but you need neither in this case).
Just to be clear with terminology: you are running Ansible on your local machine, you are provisioning the controlled machine, and you want to ssh from the controlled machine to a third-party server.
What I do is I upload the ssh key to the controlled machine, in /root/.ssh (more generally $HOME/.ssh where $HOME is the home directory of the controlled machine user who will connect to the third-party server—in my case that's root). I don't use the names id_rsa and id_rsa.pub, because I don't want to touch the default keys of that user (these might have a different purpose; for example, I use them to backup the controlled machine). So this is the code:
- name: Install bitbucket aptiko_ro ssh key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa
    mode: 0600
    content: "{{ aptiko_ro_ssh_key }}"

- name: Install bitbucket aptiko_ro ssh public key
  copy:
    dest: /root/.ssh/aptiko_ro_id_rsa.pub
    content: "{{ aptiko_ro_ssh_pub_key }}"
Next, you need to tell the controlled machine ssh this: "When you connect to the third-party server, use key X instead of the default key, and logon as user Y". You tell it in this way:
- name: Install ssh config that uses aptiko_ro keys on bitbucket
  copy:
    dest: /root/.ssh/config
    content: |
      Host bitbucket.org
        IdentityFile ~/.ssh/aptiko_ro_id_rsa
        User aptiko_ro
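As a usage example, once the key and the ssh config are in place, a checkout task along these lines should work from the controlled machine without any agent forwarding (the repository URL and destination path are made-up placeholders):

- name: Clone the private repository over ssh
  git:
    repo: git@bitbucket.org:myteam/myproject.git
    dest: /opt/myproject
    accept_hostkey: yes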

Ansible prompts password when using synchronize

I'm using ansible in the following way:
ansible-playbook -f 1 my-play-book.yaml --ask-pass --ask-sudo-pass
After this I'm asked to enter the ssh & sudo passwords (same password for both).
Inside my playbook file I'm using synchronize task:
synchronize: mode=push src=rel/path/myfolder/ dest=/abs/path/myfolder/
For each host, I'm prompted to enter the ssh password of the remote host (the same one I entered at the beginning of the playbook run).
How can I avoid entering the password when executing synchronize task?
If you have set up the ssh keys correctly on the <host>, then the following should work.
ansible all -m synchronize -a "mode=push src=rel/path/myfolder/ dest=/abs/path/myfolder/" -i <host>, -vvv
I was able to get the above working without any password prompt.
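If key-based login is not in place yet, one hedged way to bootstrap it with Ansible itself is the authorized_key module; the key path below is an assumption, and this bootstrap run will still prompt once via --ask-pass, after which synchronize should stop asking:

- hosts: all
  tasks:
    - name: Install the local public key in the remote user's authorized_keys
      authorized_key:
        user: "{{ ansible_user_id }}"
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"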

SSH public key log in suddenly stopped working (CENTOS 6)

I was testing a Jenkins build job in which I was using Ansible to scp a tarball to a number of servers. Below is the Ansible YAML file:
- hosts: websocket_host
  user: root
  vars:
    tarball: /data/websocket/jenkins/deployment/websocket_host/websocket.tgz
    deploydir: /root
  tasks:
    - name: copy build to websocket server
      action: copy src=$tarball dest=$deploydir/websocket.tgz
    - name: untar build on websocket server
      action: command tar xvfz $deploydir/websocket.tgz -C $deploydir
    - name: restart websocket server
      action: command /root/websocket/bin/websocket restart
The first two tasks completed successfully, with the command /root/websocket/bin/websocket restart failing. I have since been unable to log in (without a password) to any of the servers defined in my Ansible host file for websocket_host. I have verified that all permission settings are correct on both the host and client machines. I have tested this from several client machines and they all now require me to enter a password to ssh. Yesterday I was able to ssh (via my public key) with no problem. I am using the root user on the host machines and wonder whether copying files to the /root directory caused this issue, as it was the last command I was able to run successfully over a passwordless ssh session.
Turns out the Jenkins job changed ownership and group of my /root directory. The command: chown root.root /root fixes everything.
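To keep a future Jenkins run from breaking public key login again, one option is to pin the ownership in the playbook itself; a minimal sketch of such a task (an illustrative addition, not part of the original answer):

- name: ensure /root stays owned by root so passwordless ssh keeps working
  file:
    path: /root
    state: directory
    owner: root
    group: root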