Capistrano Deploy asks for Username for Git but moves to new line before I can enter it - ruby-on-rails-3

When deploying my Rails 3 app with Capistrano, it gets to the step where Capistrano executes this command:
* executing "if [ -d /var/www/appname/shared/cached-copy ]; then cd /var/www/dflabs1/shared/cached-copy && git fetch -q origin && git fetch --tags -q origi
n && git reset -q --hard d0a1373a3634935de1a75f377698ba53574fe580 && git clean -q -d -x -f; else git clone -q https://github.com/username/dflabs1.git /va
r/www/appname/shared/cached-copy && cd /var/www/appname/shared/cached-copy && git checkout -q -b deploy d0a1373a3634935de1a75f377698ba53574fe580; fi"
servers: ["11.10.1.162"]
Password:
[11.10.1.162] executing command
** [11.10.1.162 :: out] Username for 'https://github.com':
The problem is that when it outputs "Username for 'https://github.com':", the cursor jumps to a new line without letting me enter the username. If I try to enter the username on the new line, the deploy just does nothing. This happens when deploying from an Ubuntu 12.04 desktop to an Ubuntu 12.04 server.
I tried adding the 'set :scm_username' option to deploy.rb, but that had no effect. I tried both in the Ubuntu terminal and in the Terminal view inside Aptana.

Try importing the SSH key of your deployment server into GitHub.
You can do this here: SSH Keys.
You can find GitHub's tutorial for SSH keys here: Generating SSH Keys
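A rough sketch of that setup (the key type and paths are just the common defaults, adjust as needed), run on the deployment server:
# generate a keypair for the user that runs the git commands
ssh-keygen -t rsa -b 4096 -C "deploy@appserver"
# paste this into GitHub under Settings -> SSH Keys
cat ~/.ssh/id_rsa.pub
# verify that GitHub accepts the key
ssh -T git@github.com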
[EDIT]
Actually, I just did it myself and checked the output.
Are you sure you set set :repository, "git@github.com:[GitHub-User]/[GitHub-Repository-Name].git" in your deploy.rb correctly?
Your output shows
else git clone -q https://github.com/username/dflabs1.git
whereas mine looks like
else git clone -q git@github.com:[GitHub-User]/[GitHub-Repository-Name].git
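After switching :repository to the SSH URL, a quick sanity check (assuming Capistrano 2's standard tasks, which the output above suggests) is:
# checks local and remote dependencies before an actual deploy
cap deploy:check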

Related

Why Molecule is not able to start a docker container (Failed to create temporary directory)

I found a similar case here. I am using Molecule to test my Ansible roles, but for some reason it is skipping the "create" part and gives an error like:
fatal: [rabbitmq]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" && echo ansible-tmp-1638541586.6239848-828-250053975102429=\"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" ), exited with result 1", "unreachable": true}
It is skipping the create process: Skipping, instances already created. However, nothing is running:
name@EEW00438:~/.cache$ docker ps -a
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
name@EEW00438:~/.cache$
What I tried:
molecule destroy
molecule reset
restarting
rm -rf ~/.cache/
changing remote_tmp to /tmp/.ansible/ in /etc/ansible/ansible.cfg
reinstalling molecule
This issue is only with one role.
UPDATE:
It is failing on this step:
mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623 `\" && echo ansible-tmp-1638782939.31706-2913-12516475286623=
mkdir: cannot create directory ‘"/home/user/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623"’: No such file or directory
I stumbled upon this issue as well.
When you create the role, you need to create it with molecule init role --driver-name docker ns.myrole to enable Docker. Be sure to install the Docker driver too if you haven't: pip install --upgrade molecule-docker
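Collected as commands (ns.myrole is just the placeholder role name from above):
# install or upgrade the docker driver used by molecule
pip install --upgrade molecule-docker
# scaffold a new role wired up for the docker driver
molecule init role --driver-name docker ns.myrole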
So if you need to tweak the container that runs, edit molecule.yml. It defaults to CentOS. I switched to Ubuntu there and created a Dockerfile to provision the container with the things that need to exist.
molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:22.04 # this is required but ignored since I specify a `dockerfile`
    pre_build_image: false
    dockerfile: Dockerfile
provisioner:
  name: ansible
verifier:
  name: ansible
For example, Ubuntu 22.04 no longer ships a plain python binary, so I added an alias at the end of what Molecule renders so that Ansible can call python and have it redirect to python3:
FROM ubuntu:22.04
RUN if [ $(command -v apt-get) ]; then export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python3 sudo bash ca-certificates iproute2 python3-apt aptitude && apt-get clean && rm -rf /var/lib/apt/lists/*; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install /usr/bin/python3 /usr/bin/python3-config /usr/bin/dnf-3 sudo bash iproute && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y /usr/bin/python /usr/bin/python2-config sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python3 sudo bash iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python3 sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python3 sudo bash ca-certificates iproute2 && xbps-remove -O; fi
RUN echo 'alias python=python3' >> ~/.bashrc
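To pick up changes to the Dockerfile, recreating the instance is enough (standard Molecule commands):
# tear down the old container, then rebuild the image and converge
molecule destroy
molecule converge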
It's been years since I last used Molecule, and I must say... it's gone downhill. It used to be easy/clear/direct to get things working. Sigh. I guess I should stick to containers and force the migration off VMs sooner!
The problem may be caused by a Docker context change performed at the start of Docker Desktop. Despite this, Molecule does create a container, but in an inactive context.
At startup, Docker Desktop automatically switches the context from default to desktop-linux [1]. The active context determines which containers are available from the CLI.
The context cannot be set in Molecule, i.e. the default context is always used to create containers [2].
$ molecule create --scenario-name test
... # The output with the error is skipped because it duplicates the output from the question
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker context ls
NAME              TYPE   DESCRIPTION                               DOCKER ENDPOINT                                    KUBERNETES ENDPOINT   ORCHESTRATOR
default           moby   Current DOCKER_HOST based configuration  unix:///var/run/docker.sock                                              swarm
desktop-linux *   moby                                            unix:///home/bkarpov/.docker/desktop/docker.sock
$ docker context use default
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a71bfd28992f geerlingguy/docker-ubuntu2004-ansible "bash -c 'while true…" 5 minutes ago Up 5 minutes some-instance
$ molecule login --scenario-name test
INFO Running test > login
root@some-instance:/#
Solutions
Switch the context back to default manually
docker context use default
This solution is suitable for one-time use, since the context will need to be switched every time Docker Desktop is started. The Docker Desktop service will continue to work using the desktop-linux context.
Issue requesting context switching in Docker Desktop: https://github.com/docker/roadmap/issues/47
Stop Docker Desktop
systemctl --user stop docker-desktop
Stopping the Docker Desktop service will automatically switch to the default context.
Set DOCKER_CONTEXT so that Docker Desktop does not change the context in the current shell
export DOCKER_CONTEXT=default
systemctl --user restart docker-desktop
When stopping, the context returns to default, and when starting, it does not switch to desktop-linux.
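To make this survive new shells (assuming a bash environment), the variable can be persisted in the shell profile:
# persist the default context for all future shells
echo 'export DOCKER_CONTEXT=default' >> ~/.bashrc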
References
https://docs.docker.com/desktop/install/ubuntu/#launch-docker-desktop
https://github.com/ansible-community/molecule-docker#faq

rsync not finding local directory when sending through SSH on pipeline

We are using Bitbucket Pipelines to push from the pipeline's build process to our remote server.
This is a snippet of the bitbucket-pipelines.yml file
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    COMMAND: '''rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'''
    PORT: '22007'
The connection itself works, and it does run the command correctly once it is remoted onto the server...
INFO: Executing the pipe...
INFO: Using default ssh key
INFO: Executing command on {HOST}
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}'
bash: rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}: No such file or directory
Connection to {HOST} closed.
I've tried to run the same command locally from the directory on my machine
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 "$PWD" {USER}@{HOST}:/home/{USER}'
but it just duplicates the home directory on the remote.
It looks to me like it's looking for the source directory on the remote server rather than in the Docker container from Bitbucket (or in the files on my local machine with $PWD).
If I run the command without the quotes, it fails because it uses port 22 by default. I've also tried moving the command into a bash script and using MODE: 'Script', which is an accepted pattern for the pipe, but I can't use my environment variables in the .sh file.
If all you want to do is copy the files from the pipeline to the production server, you should use the rsync-deploy pipe instead of ssh-run. Your pipe configuration will look something like the following:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: $PRODUCTION_USER
      SERVER: $PRODUCTION_SERVER
      REMOTE_PATH: 'home/$PRODUCTION_USER'
      LOCAL_PATH: 'build'
      SSH_PORT: '22007'
Make sure to configure your SSH keys in Pipelines properly (here is a link to our docs on configuring SSH keys: https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html)
I've found another way around this that doesn't need a pipe: instead, I run rsync as a script step
image: atlassian/default-image:latest
- rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER
It seems the -e none is an important addition, as is loading the Atlassian image, since the build otherwise fails to find the rsync command. I found this info in a post on Atlassian Community.
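Before letting the pipeline run it, rsync's dry-run mode is a safe way to check what would be transferred (the host and paths below are placeholders):
# --dry-run lists the planned transfers without copying anything
rsync -rltDvzCh --dry-run --stats -e 'ssh -e none -p 22007' ./build/ deploy@example.com:/home/deploy/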
This seems to work pretty well for me:
image: node:10.15.3
pipelines:
  default:
    - step:
        name: <project-path>
        script:
          - apt-get update && apt-get install -y rsync
          # make sure ~/.ssh exists before appending the host key
          - mkdir -p ~/.ssh
          - ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
          - cd $BITBUCKET_CLONE_DIR
          - rsync -r -v -e ssh . $SSH_USER@$SSH_HOST:/<project-path>
          - ssh $SSH_USER@$SSH_HOST 'cd <project-path> && npm install'
          - ssh $SSH_USER@$SSH_HOST 'pm2 restart 0'
Note: avoid using the sudo command in pipeline scripts.
Same issue with atlassian/default-image:3:
rsync -azv ./project_path/*
bash: rsync: command not found
Solution:
apt-get update && apt-get install -y rsync

how to execute git commands in gitlab-ci scripts

I want to change a file and commit the change inside a GitLab CI pipeline.
I tried writing normal git commands in the script:
script:
  - git clone git@gitlab.url.to.project.git
  - cd project file
  - touch test.txt
  - git config --global user.name "${GITLAB_USER_NAME}"
  - git config --global user.email "${GITLAB_USER_EMAIL}"
  - git add .
  - git commit -m "testing autocommit"
  - git push
I get "cannot find command git" or something along those lines. I know it has something to do with tags, but if I try to add a git tag it says there is no active runner. Does anyone have an idea how to run git commands in GitLab CI?
First you need to make sure you can actually use git: either run your jobs on a shell executor located on a system that has git, or use a Docker executor with an image that has git installed.
The next problem you will encounter is that you can't push to Git(lab), since you can't enter credentials.
So the solution is to create an SSH keypair, load the SSH private key into your CI environment through CI/CD variables, and add the corresponding public key to your Git(lab) account.
Source: https://about.gitlab.com/2017/11/02/automating-boring-git-operations-gitlab-ci/
Your .gitlab-ci.yml will then look like this:
job-name:
  stage: touch
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$GIT_SSH_PRIV_KEY")
    - git config --global user.name "${GITLAB_USER_NAME}"
    - git config --global user.email "${GITLAB_USER_EMAIL}"
    - mkdir -p ~/.ssh
    - cat gitlab-known-hosts >> ~/.ssh/known_hosts
  script:
    - git clone git@gitlab.url.to.project.git
    - cd project file
    - touch test.txt
    - git add .
    - git commit -m "testing autocommit"
    - git push
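To produce the $GIT_SSH_PRIV_KEY variable and the gitlab-known-hosts file referenced above, something along these lines works (a sketch; the file names match the snippet):
# generate a dedicated deploy keypair without a passphrase
ssh-keygen -t ed25519 -f deploy_key -N ''
# store deploy_key's content in the GIT_SSH_PRIV_KEY CI/CD variable
# and add deploy_key.pub to the Git(lab) account as described above
# pre-collect the server's host key so host checking can stay strict
ssh-keyscan gitlab.com > gitlab-known-hosts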
GitLab CI/CD clones the repository inside the running job automatically. What you need is the git command installed. You could use the bitnami/git image to run the job in a container that has the command installed.
This worked for me (trying to verify if a tag is available):
tag-available:
  stage: .pre
  image: bitnami/git:2.37.1
  script:
    # list all tags
    - git tag -l
    # check existence of tag "v1.0.0"
    - >
      if [ $(git tag -l "v1.0.0") ]; then
        echo "yes"
      else
        echo "no"
      fi
If you need to authorize the job against some GitLab registry (or API), note that there are predefined variables for user and password (tokens). For this kind of action you are probably most interested in these variables:
$CI_REGISTRY_PASSWORD
$CI_REGISTRY_USER
$CI_REGISTRY
$CI_REPOSITORY_URL
$CI_DEPLOY_PASSWORD
$CI_DEPLOY_USER
CI_DEPLOY_USER and CI_DEPLOY_PASSWORD are only populated if a deploy token named "gitlab-deploy-token" has been created. Read more here.
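A typical use of these variables (this mirrors the pattern from the GitLab docs, shown here as a sketch) is logging the job into the project's container registry:
# authenticate against the registry with the predefined variables
docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"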

Add Ruby SDK from Docker container as a remote SDK on RubyMine

RubyMine has options to add remote SDKs using Vagrant and SSH; however, I decided to go with Docker. I have already created a Ruby container, but I don't know how to enable SSH access to it so RubyMine can set it as the remote SDK.
Is it possible?
I tried to follow this article, but the Ruby image doesn't have yum, and the epel-release package is for Fedora/RedHat.
Hey, are you using the official Ruby Docker image?
If so, it's based on Debian, and you'll have to use apt-get to install packages.
Here's a handy snippet for installing openssh-server and configuring a user in a Dockerfile:
FROM ruby:2.1.9
#======================
# Install OpenSSH server (sshd)
#======================
RUN apt-get update -qqy \
&& apt-get -qqy install \
openssh-server \
&& echo "PidFile ${RUN_DIR}/sshd.pid" >> /etc/ssh/sshd_config \
&& sed -i 's|session required pam_loginuid.so|session optional pam_loginuid.so|g' /etc/pam.d/sshd \
&& mkdir -p /var/run/sshd \
&& rm -rf /var/lib/apt/lists/*
# Add user rubymine with password rubymine and give ownership of rubymine home dir
# --disabled-password and --gecos "" keep the adduser step non-interactive during the build
RUN adduser --quiet --disabled-password --gecos "" rubymine \
&& echo "rubymine:rubymine" | chpasswd \
&& chown -R rubymine:rubymine /home/rubymine
EXPOSE 22
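To try it out (the image name and host port are arbitrary; the Dockerfile defines no CMD, so sshd is started explicitly):
# build the image and start sshd in the foreground
docker build -t rubymine-ssh .
docker run -d -p 2222:22 --name rubymine-ssh rubymine-ssh /usr/sbin/sshd -D
# point RubyMine (or a manual test) at localhost:2222
ssh rubymine@localhost -p 2222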
I'm not sure which exact configurations you can perform with RubyMine, but it's possible to open a tty in the container without needing ssh:
# run it as a daemon (keep stdin open so the default irb process stays alive)
docker run -dit --name=myruby ruby:2.1.9
# connect to it
docker exec -it myruby /bin/bash
UPDATE:
Try setting the DOCKER_HOST environment variable so the Docker client talks to the daemon over a TCP port:
export DOCKER_HOST='tcp://localhost:2376'
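Note that the client-side variable alone is not enough: the daemon also has to listen on that TCP socket (a sketch; exposing the socket without TLS is insecure, so keep it bound to localhost):
# make the daemon listen on the unix socket and on localhost TCP
sudo dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2376
# then, in the client shell:
export DOCKER_HOST='tcp://localhost:2376'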

Capistrano "Permission denied (publickey)." error message

I know that this problem has been asked many times, but I can't get it sorted (I'm a beginner).
What I'm trying to do is deploy my Rails application to my production server using Capistrano. I stored my project in a repository on GitLab. Everything was working perfectly until I moved my application to another GitLab repository (git@gitlab.com:myusername/xxxxxx.git).
I think I set up my deploy.rb file accordingly :
set :application, "xxxxxx"
set :user, "yyyyy"
set :repository, "git#gitlab.com:myusername/xxxxxx.git"
But when I try to deploy, I get a permission error:
[xxxxxx.com] executing command
[xxxxxx.com] env PATH=/home/kar/.rbenv/shims:/home/kar/.rbenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin sh -c 'if [ -d /var/www/xxxxxx/shared/cached-copy ]; then cd /var/www/xxxxxx/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard 97ff4f45240a680c1d278325d7ac1871536c8091 && git clean -q -d -x -f; else git clone -q git@gitlab.com:myusername/xxxxxx.git /var/www/xxxxxx/shared/cached-copy && cd /var/www/xxxxxx/shared/cached-copy && git checkout -q -b deploy 97ff4f45240a680c1d278325d7ac1871536c8091; fi'
** [xxxxxx.com :: err] Permission denied (publickey).
** [xxxxxx.com :: err] fatal: The remote end hung up unexpectedly
Could you please suggest some tests to find out where the issue comes from?
Is there a key to add on my server?
Thanks a lot for your help.
Here's a Capistrano 3 plugin created solely for troubleshooting problems like this one: capistrano-ssh-doctor.
The plugin outputs a report of the issues it found and suggested next steps.
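Independently of the plugin, two manual checks usually narrow this down (yyyyy and xxxxxx.com stand in for the user and server from the question; the second check assumes the deploy relies on SSH agent forwarding):
# 1. can the workstation itself authenticate against GitLab?
ssh -T git@gitlab.com
# 2. does the key reach the deploy server via agent forwarding?
ssh -A yyyyy@xxxxxx.com 'ssh -T git@gitlab.com'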