Capistrano: cap staging git:check - how to configure ssh path in git-ssh.sh?

I want to figure out how to set the path to ssh in the git-ssh.sh file that Capistrano copies to the server when executing the deploy command.
Currently the second line of git-ssh.sh looks like this:
exec /usr/bin/ssh -o PasswordAuthentication=no -o StrictHostKeyChecking=no "$@"
I cannot execute this command directly on the server; the following error occurs:
[5b4fcea9] /tmp/app.de/git-ssh.sh: line 2: /usr/bin/ssh: No such file or directory
After changing the ssh path to /usr/local/bin/ssh it works fine, but Capistrano uploads the original file again every time cap staging deploy is called.
See my logs on pastie for more details, especially the git:check part:
http://pastie.org/9523811
Is it possible to set this path in my deploy.rb?
Thanks & cheers
Mirko

Yeah, I got it. :))
Rake::Task["deploy:check"].clear_actions

namespace :deploy do
  task check: :'git:wrapper' do
    on release_roles :all do
      execute :mkdir, "-p", "#{fetch(:tmp_dir)}/#{fetch(:application)}/"
      upload! StringIO.new("#!/bin/sh -e\nexec /usr/local/bin/ssh -o PasswordAuthentication=no -o StrictHostKeyChecking=no \"$@\"\n"), "#{fetch(:tmp_dir)}/#{fetch(:application)}/git-ssh.sh"
      execute :chmod, "+x", "#{fetch(:tmp_dir)}/#{fetch(:application)}/git-ssh.sh"
    end
  end
end

This line seems to be hardcoded in the Capistrano source code (link here).
Instead of changing the Capistrano source, why don't you create a /usr/bin/ssh symlink on the server? Like this:
ln -s /usr/local/bin/ssh /usr/bin/ssh
That will create a /usr/bin/ssh symlink that, when executed, will run /usr/local/bin/ssh.
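If you go that route, it may be worth checking first that nothing else already occupies the target path. A minimal sketch (paths taken from the question; root or sudo may be needed):
which ssh                               # should print /usr/local/bin/ssh, per the question
ls -l /usr/bin/ssh                      # confirm the target path is actually empty
ln -s /usr/local/bin/ssh /usr/bin/ssh   # create the symlink (run as root or with sudo)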

If you add the following line to your Capfile, you can override the git tasks:
require 'capistrano/git'

Related

Docker entrypoint initdb PERMISSION DENIED

I am getting the following error when I run docker-compose up:
Thanks a lot for your help
I resolved this problem by adding the following to the Dockerfile, after the line that copies the scripts to /docker-entrypoint-initdb.d/:
RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/
Example Dockerfile:
FROM mysql:latest
ENV MYSQL_DATABASE NAME_DATABASE
ENV MYSQL_ROOT_PASSWORD ***********
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/
RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/
EXPOSE 3306
CMD ["mysqld", "--character-set-server=utf8mb4", "--collation-server=utf8mb4_unicode_ci"]
The next step is to build the image:
docker build -t image-db:latest .
Then create the container:
docker run -d -p 3306:3306 --name container-db image-db:latest
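To confirm the init scripts were actually executed, checking the container logs is usually enough; the names below are the ones used in the example above:
docker logs container-db    # the entrypoint logs each init script it runs on first startup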
You should not override the postgres image entrypoint. It is designed to look for .sql files in the /docker-entrypoint-initdb.d/ directory (see this line in the script).
You should just mount your .sql files into /docker-entrypoint-initdb.d/ and they will be processed on startup (only if the database does not already exist).
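As a minimal sketch of that approach, assuming your scripts live in ./sql-scripts on the host (the container name and password value here are placeholders):
docker run -d \
  -e POSTGRES_PASSWORD=example \
  -v "$(pwd)/sql-scripts":/docker-entrypoint-initdb.d:ro \
  --name some-postgres \
  postgres:latest
Remember the scripts only run when the data directory is empty, i.e. on the very first start.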
I had the same issue; in my case the problem was the Linux user. I am using root as the runner, and the mounted volume on the local machine did not have the right permissions. I used chmod -R 777 scripts and it worked fine. Technically, you need to set permissions on both the local machine and in your container.

ssh connection to Vagrant virtual machine using Ansible fails

I'm new to Ansible. I set up an Ubuntu virtual machine using Vagrant. I'm able to ssh into the machine using ssh vagrant@172.16.23.228. I have created an ssh key with the same password as the VM, added it to the agent and specified the path in my hosts file.
After following the instructions here, I started to receive the following errors when running this command (ansible all --inventory-file=hosts.ini --module-name ping -u vagrant -vvvv):
Not sure what I'm missing from my set-up, what else I need to check?
<172.16.23.228> ESTABLISH CONNECTION FOR USER: vagrant
<172.16.23.228> REMOTE_MODULE ping
<172.16.23.228> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/user/.ansible/cp/ansible-ssh-%h-%p-%r" -o Port=22 -o IdentityFile="~Users/user/.ssh/onemachine_rsa" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 172.16.23.228 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1451080871.59-247915080664557 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1451080871.59-247915080664557 && echo $HOME/.ansible/tmp/ansible-tmp-1451080871.59-247915080664557'
172.16.23.228 | FAILED => SSH Error: tilde_expand_filename: No such user Users
while connecting to 172.16.23.228:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
My hosts file looks like:
[testserver]
172.16.23.228 ansible_ssh_port=22 ansible_ssh_user=vagrant ansible_ssh_private_key_file=~Users/user/.ssh/onemachine_rsa
What you're doing can work, but I highly recommend using the built-in Ansible provisioner in Vagrant. It will make your life easier and improve your Vagrant skills at the same time. And if you need to execute any shell scripts, use the shell provisioner.
Providing this answer for the benefit of those, like me, who arrive late to the party. Recent Vagrant versions install a per-machine private key in a local directory instead of using the admittedly insecure key shared by every VM. You'll have to create an ansible_hosts file like this one:
[vagrantboxes]
jessie ansible_ssh_port=2222 ansible_ssh_host=127.0.0.1
[vagrantboxes:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
The key is the last line, which provides the path to the actual private key used by the virtual machine started from this particular directory.
The path to your ansible_ssh_private_key_file is incorrect. Try ansible_ssh_private_key_file=~/.ssh/onemachine_rsa instead. The tilde in this case expands to the home directory of your user on the local machine you're running ansible from.
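A quick sanity check before re-running Ansible, using the key file name from the question, is to make sure the corrected path actually resolves on the control machine:
ls -l ~/.ssh/onemachine_rsa    # ~ expands to your local home directory, e.g. /Users/user
ansible all --inventory-file=hosts.ini --module-name ping -u vagrant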

Permission denied using ssh command in shell

I'm trying to execute this shell script from the command line:
host="192.168.X.XXX"
user="USERNAME"
pass="MYPASS"
sshpass -p "$pass" scp -o StrictHostKeyChecking=no /home/MYPATH/File.import "$user#$host:/"home/MYPATH/
This copies a file from my local server to the remote server. The remote server is a copy of the local server, but when I try to execute this script I get this error:
Permission denied, please try again.
I don't understand why it works when I execute the same command directly on the command line:
USERNAME@MYSERVER:~$ sshpass -p 'MYPASS' scp -o StrictHostKeyChecking=no /home/MYPATH/File.import USERNAME@192.168.X.XXX:/home/MYPATH/
Does somebody have a solution?
Please use a pipe or the -e option for the password anyway.
export SSHPASS=password
sshpass -e ssh user@remote
Your simple command with -e option:
export SSHPASS=password
sshpass -e scp -o StrictHostKeyChecking=no /home/MYPATH/File.import user@192.168.X.XXX:/home/MYPATH/
Please remove the wrong quotes from your command:
sshpass -p "$pass" scp -o StrictHostKeyChecking=no /home/MYPATH/File.import $user#$host:/home/MYPATH/
You should also be able to remove the quotes around $pass.
Please ensure that you have no special characters in your pass variable or escape them correctly (and no typos anywhere).
For simplicity, use an ssh command instead of scp for testing.
Use the -v or -vvv option with the scp command to check what scp is trying to do. Also check the secure log or auth.log on the remote server.
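Putting those suggestions together, a debugging run might look like this (user, host and paths as defined in the question; adjust the log path to your distribution):
sshpass -p "$pass" ssh -v -o StrictHostKeyChecking=no "$user@$host" true   # test authentication only
sshpass -p "$pass" scp -v -o StrictHostKeyChecking=no /home/MYPATH/File.import "$user@$host":/home/MYPATH/
# and on the remote server:
tail -f /var/log/auth.log   # or /var/log/secure on RHEL/CentOS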
You have to install the sshpass command, then use the snippet below:
export SSHPASS=password
sshpass -e sftp user@hostname << !
cd sftp_path
put filename
bye
!
A gotcha I encountered was escaping special characters in the password, which wasn't necessary when entering it interactively with ssh.
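For example, a password containing $ or other special characters is safest assigned in single quotes so the shell does not expand it (illustrative value only):
pass='My$ecret!123'                         # single quotes: the shell does not expand $ecret
sshpass -p "$pass" ssh "$user@$host" true   # quick authentication test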

run job file via ssh command order how?

About running script.sh via ssh:
#!/bin/bash
/usr/local/cpanel/scripts/cpbackup
clamscan -i -r --remove /home/
exit
Does that mean it runs /usr/local/cpanel/scripts/cpbackup and, after it finishes, runs clamscan -i -r --remove /home/?
Or does it run the two commands at the same time?
Commands in a script are run one at a time in order unless any of the commands "daemonizes" itself.
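A minimal sketch of the difference, reusing the two commands from the question (the parallel variant is only for illustration):
#!/bin/bash
# Sequential (the default): clamscan starts only after cpbackup has exited
/usr/local/cpanel/scripts/cpbackup
clamscan -i -r --remove /home/

# Parallel: a trailing & sends each command to the background; wait blocks until both finish
/usr/local/cpanel/scripts/cpbackup &
clamscan -i -r --remove /home/ &
wait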

set environment variable SSH_ASKPASS or askpass in sudoers, resp

I'm trying to log in to an ssh server and execute something like:
ssh user@domain.com 'sudo echo "foobar"'
Unfortunately I'm getting an error:
sudo: no tty present and no askpass program specified
Google told me to either set the environment variable SSH_ASKPASS or to set askpass in the sudoers file. My remote machine is running on Debian 6 and I've installed the packages ssh-askpass and ssh-askpass-gnome and my sudoers file looks like this:
Defaults env_reset
Defaults askpass=/usr/bin/ssh-askpass
# User privilege specification
root ALL=(ALL) ALL
user ALL=(ALL) ALL
Can someone tell me what I'm doing wrong and how to do it better?
There are two ways to get rid of this error message. The easy way is to provide a pseudo terminal for the remote sudo process. You can do this with the option -t:
ssh -t user@domain.com 'sudo echo "foobar"'
Rather than allocating a TTY, or setting a password that can be seen on the command line, do something like this.
Create a shell script that echoes your password:
#!/bin/bash
echo "mypassword"
Then copy it to the node you want using scp, like this:
scp SudoPass.sh somesystem:~/bin
Then when you ssh do the following:
ssh somesystem "export SUDO_ASKPASS=~/bin/SudoPass.sh;sudo -A command -parameter"
Another way is to run sudo -S in order to "Write the prompt to the standard error and read the password from the standard input instead of using the terminal device" (according to man) together with cat:
cat | ssh user@domain.com 'sudo -S echo "foobar"'
Just input the password when being prompted to.
One advantage is that you can redirect the output of the remote command to a file without "[sudo] password for …" in it:
cat | ssh user@domain.com 'sudo -S tar c --one-file-system /' > backup.tar
Defaults askpass=/usr/bin/ssh-askpass
ssh-askpass requires an X server, so instead of providing a terminal (via -t, as suggested by nosid), you may forward the X connection via -X:
ssh -X user@domain.com 'sudo echo "foobar"'
However, according to current documentation, askpass is set in sudo.conf as Path, not in sudoers.
How about adding this in the sudoers file:
user ALL=(ALL) NOPASSWD: ALL
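With that entry in place, the original command should no longer prompt; sudo's -n flag makes it fail with an error instead of hanging if a password were still required:
ssh user@domain.com 'sudo -n echo "foobar"'   # -n: never prompt, fail if a password is needed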