Lsyncd Permission denied (publickey,password) - ssh

I am setting up lsyncd to automatically sync a local folder with a remote one. I have researched the many solutions available and tried adding extra parameters to the conf file.
I have also updated sshd_config with PermitRootLogin without-password.
I am able to ssh with a password, and I have also tried rsync manually without a password, but when I run the sync via lsyncd it reports permission denied three times and exits (it seems to be asking for a password).
My lsyncd.conf.lua file:
settings {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status",
    statusInterval = 10
}

sync {
    default.rsync,
    source = "/home/gaurav/Desktop/source/",
    target = "root@xxx.xxx.xx.xxx:/root/destination/",
    rsync = {
        compress = true,
        acls = true,
        verbose = true,
        _extra = {"-P", "-e", "/usr/bin/ssh -p 22 -i /home/gaurav/.ssh/id_rsa -o StrictHostKeyChecking=no"}
    }
}
I also tried this one:
settings = {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status"
}

sync {
    default.rsyncssh,
    source = "/home/gaurav/Desktop/source/",
    host = "xxx.xxx.xx.xxx",
    targetdir = "/root/destination/"
}
Logs:
Sun Dec 7 17:18:09 2014 Normal: recursive startup rsync: /home/gaurav/Desktop/source/ -> root@xxx.xxx.xx.xxx:/root/destination/
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.1]
Sun Dec 7 17:18:12 2014 Error: Temporary or permanent failure on startup of "/home/gaurav/Desktop/source/". Terminating since "insist" is not set.

If you are using Ubuntu 12.04, you must use rsyncOps instead of the rsync = {} block.
Try this:
sync {
    default.rsync,
    source = "/var/www/",
    -- "server" is a variable defined earlier in the config, e.g. server = "root@xxx.xxx.xx.xxx"
    target = server..":/var/www/",
    excludeFrom = "/etc/lsyncd/lsyncd-excludes.txt",
    rsyncOps = {"-e", "/usr/bin/ssh -o StrictHostKeyChecking=no", "-avz"}
}
https://www.stephenrlang.com/2015/12/how-to-install-and-configure-lsyncd/
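On newer lsyncd releases (2.1 and later) the SSH command is usually passed through the rsh key of the rsync block rather than through _extra or rsyncOps. A minimal sketch, assuming lsyncd 2.1+ and that the key file is readable by the user lsyncd runs as (typically root):

settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status",
}

sync {
    default.rsync,
    source = "/home/gaurav/Desktop/source/",
    target = "root@xxx.xxx.xx.xxx:/root/destination/",
    rsync  = {
        compress = true,
        -- hand rsync the identity file explicitly; a key under another
        -- user's ~/.ssh is not picked up when lsyncd runs as root
        rsh = "/usr/bin/ssh -p 22 -i /home/gaurav/.ssh/id_rsa -o StrictHostKeyChecking=no",
    }
}

Since the log shows the password prompt being hit, it is also worth confirming that the public key is actually present in /root/.ssh/authorized_keys on the target.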

Related

Terraform - Failed to set up SSH tunneling for host

Hello, I am trying to deploy an RKE Kubernetes cluster with Terraform, but I am not able to connect to the desired host via ssh:
time="2022-02-28T11:17:38+01:00" level=warning msg="Failed to set up SSH tunneling for host [poc-k8s.my-domain.com]: Can't retrieve Docker Info: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info\": Unable to access node with address [poc-k8s.my-domain.com:22] using SSH. Please check if you are able to SSH to the node using the specified SSH Private Key and if you have configured the correct SSH username. Error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"
and this is the .tf file I am using:
terraform {
  required_providers {
    rke = {
      source  = "rancher/rke"
      version = "1.3.0"
    }
  }
}

provider "rke" {
  log_file = "rke_debug.log"
}

resource "rke_cluster" "cluster" {
  nodes {
    address = "poc-k8s.my-domain.com"
    user    = "root"
    role    = ["controlplane", "worker", "etcd"]
    ssh_key = file("~/.ssh/root_key")
  }
  nodes {
    address = "poc-k8s.my-domain.com"
    user    = "root"
    role    = ["worker", "etcd"]
    ssh_key = file("~/.ssh/root_key")
  }
  addons_include = [
    "https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml",
    "https://gist.githubusercontent.com/superseb/499f2caa2637c404af41cfb7e5f4a938/raw/930841ac00653fdff8beca61dab9a20bb8983782/k8s-dashboard-user.yml",
  ]
}

resource "local_file" "kube_cluster_yaml" {
  filename          = "~/.kube/kube_config_cluster.yml"
  sensitive_content = rke_cluster.cluster.kube_config_yaml
}
The key is of course correct, and I am able to connect to the desired host:
ssh -i ~/.ssh/root_key root@poc-k8s.my-domain.com
What am I missing here?
[Update]
The cluster resource has a delay_on_creation property that can be used:
resource "rke_cluster" "cluster" {
delay_on_creation = 180
(...)
}
I'm facing a similar issue. On the second run of terraform apply it works correctly. In my case the issue is that Docker is not up fast enough for the RKE provider.
I've found the following workaround from citynetwork/citycloud-examples:
resource "rke_cluster" "cluster" {
(...)
depends_on = [null_resource.wait-for-docker]
}
resource "null_resource" "wait-for-docker" {
provisioner "local-exec" {
command = "sleep 180"
}
depends_on = [
# list of servers docker being installed on
(...)
]
}
It waits for 180 seconds, which is not ideal, though.
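A variant that avoids the fixed sleep is to poll the node until the Docker daemon answers. A rough sketch, reusing the question's host and key as placeholders:

resource "null_resource" "wait-for-docker" {
  provisioner "remote-exec" {
    connection {
      host        = "poc-k8s.my-domain.com"
      user        = "root"
      private_key = file("~/.ssh/root_key")
    }
    inline = [
      # loop until `docker info` succeeds instead of sleeping blindly
      "until docker info > /dev/null 2>&1; do sleep 5; done",
    ]
  }
}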

SSH connect GCP

I am at a deadlock. I am trying to connect to the created VM over SSH, but nothing works. I added the following entries to the Terraform configuration:
provisioner "remote-exec" {
inline = [
"/bin/echo -e \"${element(random_string.password.*.result, count.index)}\n${element(random_string.password.*.result, count.index)}\" | /usr/bin/passwd root"
]
connection {
type = "ssh"
user = "root"
private_key = file(var.privat_google_key)
agent = false
timeout = "5m"
host = google_compute_instance.webserver[count.index].network_interface[0].access_config[0].nat_ip
}
}
...
resource "google_compute_project_metadata_item" "ssh-keys" {
key = "ssh-keys"
value = file(var.pub_google_key)
}
The ssh-keys metadata is added to the instance.
When I apply, I get:
google_compute_instance.webserver[0] (remote-exec): Connecting to remote host via SSH...
google_compute_instance.webserver[0] (remote-exec): Host: 1.1.1.1
google_compute_instance.webserver[0] (remote-exec): User: root
google_compute_instance.webserver[0] (remote-exec): Password: false
google_compute_instance.webserver[0] (remote-exec): Private key: true
google_compute_instance.webserver[0] (remote-exec): Certificate: false
google_compute_instance.webserver[0] (remote-exec): SSH Agent: false
google_compute_instance.webserver[0] (remote-exec): Checking Host Key: false
google_compute_instance.webserver[0]: Still creating... [5m0s elapsed]
Error: timeout - last error: SSH authentication failed (root@35.247.121.86:22): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
When I try to connect over ssh from the terminal, I get:
ssh -i [PATH_TO_PRIVATE_KEY] [USERNAME]@[EXTERNAL_IP_ADDRESS]
root@1.1.1.1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I tried different ways of adding the key, but access to the host is always denied. What could be wrong?
SSH for the root user is disabled by default on GCP. You will have to connect as a regular user; you will still be able to obtain root permissions. If you absolutely must connect with the root account, which is not recommended, I suggest using a pre-built image or startup scripts where you can enable it as described here, but I will not cover that in this answer.
So, to achieve a connection with a specific user through Terraform, you need to:
1. Change the connection user configuration in TF:
connection {
  user = "alexey"
  ...
}
2. Change the ssh-keys metadata to contain the username together with the public key, in the format described here:
ssh-rsa [KEY_VALUE] [USERNAME]
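Put together with the question's resources, the two changes could look like the sketch below. The username alexey is just the example from above, and the USERNAME: prefix on the metadata value follows the format GCP currently documents; treat the exact shape as an assumption to verify against the linked docs.

resource "google_compute_project_metadata_item" "ssh-keys" {
  key = "ssh-keys"
  # one entry per line: username, a colon, then the public key material
  value = "alexey:${file(var.pub_google_key)}"
}

# and inside the remote-exec provisioner:
connection {
  type        = "ssh"
  user        = "alexey"
  private_key = file(var.privat_google_key)
  agent       = false
  timeout     = "5m"
  host        = google_compute_instance.webserver[count.index].network_interface[0].access_config[0].nat_ip
}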

Cannot have file provisioner working with Terraform on DigitalOcean

I am trying to use Terraform to create a DigitalOcean node on which Consul is installed.
I'm using the following .tf file, but it hangs and does not copy the Consul .zip file onto the droplet.
I get the following error message after a couple of minutes:
ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
The droplets are created correctly, though. I can log in from the command line with the key I specified (thus not specifying a password). I'm guessing the connection part might be faulty, but I'm not sure what I'm missing.
Any idea?
variable "do_token" {}
# Configure the DigitalOcean Provider
provider "digitalocean" {
token = "${var.do_token}"
}
# Create nodes
resource "digitalocean_droplet" "consul" {
count = "1"
image = "ubuntu-14-04-x64"
name = "consul-${count.index+1}"
region = "lon1"
size = "1gb"
ssh_keys = ["7b:51:d3:e3:ae:6e:c6:e2:61:2d:40:56:17:54:fc:e3"]
connection {
type = "ssh"
user = "root"
agent = true
}
provisioner "file" {
source = "consul_0.7.1_linux_amd64.zip"
destination = "/tmp/consul_0.7.1_linux_amd64.zip"
}
provisioner "remote-exec" {
inline = [
"sudo unzip -d /usr/local/bin /tmp/consul_0.7.1_linux_amd64.zip"
]
}
}
Terraform requires that you specify the private SSH key to use for the connection with private_key. You can create a new variable containing the path to your private key and use it with Terraform's file interpolation function:
connection {
  type        = "ssh"
  user        = "root"
  agent       = true
  private_key = "${file("${var.private_key_path}")}"
}
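For completeness, a minimal sketch of the variable the snippet refers to (private_key_path is the placeholder name used above; the default path is an assumption):

variable "private_key_path" {
  description = "Path to the private key matching the fingerprint in ssh_keys"
  default     = "~/.ssh/id_rsa"
}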
You face this issue because your SSH key is protected by a passphrase. To solve it, you should generate a key without a passphrase.

JSCH setCommand is not working

No exception is thrown, and the command does not do any of the work it should. The permissions of the directory are not changed and the directory is not created. Please give me your suggestions.
Update:
channelexe.getExitStatus() was added, but it returns -1. What does that mean? I don't know how to find an explanation for why the command is not doing its job (setting mode 777 on fileDir1).
String depDir = "/usr/local/FTPReceive/DEPLOYED/fileDir1";
log.info("updateDepositedFilePermission ........ starts");
Session session = new FTPComponent().getSession("");
Channel channel = null;
ChannelSftp channelSftp = null;
try
{
    session.connect();
    System.out.println("session is alive:" + session.isConnected());
    channel = session.openChannel("sftp");
    channel.connect();
    channelSftp = (ChannelSftp) channel;
    ChannelExec channelexe = (ChannelExec) session.openChannel("exec");
    channelexe.setCommand("chmod 777 -R " + depDir);
    channelexe.connect();
    System.out.println("channelexe.getExitStatus:" + channelexe.getExitStatus());
}
catch (Exception e1)
{
    e1.printStackTrace();
    System.out.println("Manual Exception in updateDepositedFilePermission:" + CommonUtil.getExceptionString(e1));
}

channelexe.setCommand("chmod 777 -R " + depDir);
channelexe.setCommand("mkdir /usr/local/fileStore");
channelexe.connect();
A ChannelExec accepts a single command string to invoke on the remote system. Your second call to setCommand() is discarding the chmod command and replacing it with the mkdir command. Assuming the remote shell is bash or similar, you could use shell syntax to construct a command string which runs both commands:
String cmd = "chmod 777 -R " + depDir + " && mkdir /usr/local/fileStore";
channelexe.setCommand(cmd);
No Exception comes...
ChannelExec doesn't throw an exception when a command merely fails. You can call Channel.getExitStatus() to get the exit status of the remote command. The value will be 0 if chmod and mkdir succeeded, non-zero if they failed, and -1 for as long as the command has not finished and the channel has not closed, which is why you see -1 immediately after connect(). The channel also has functions to read the standard error of the remote command, which will let you read any error messages they output.
The JSch website has several example programs, including an example of executing a remote command.
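As a rough sketch of that pattern, here is a self-contained variant of the code above (host, user, and password are placeholders, and the FTPComponent helper is replaced by a plain JSch session):

import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class ExecExample {
    public static void main(String[] args) throws Exception {
        Session session = new JSch().getSession("user", "host", 22);
        session.setPassword("password");
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();

        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        // one shell invocation running both commands
        channel.setCommand("chmod -R 777 /usr/local/FTPReceive/DEPLOYED/fileDir1 && mkdir /usr/local/fileStore");
        ByteArrayOutputStream err = new ByteArrayOutputStream();
        channel.setErrStream(err);                  // capture error messages
        InputStream out = channel.getInputStream(); // obtain before connect()
        channel.connect();

        // drain stdout and wait for the channel to close;
        // getExitStatus() stays -1 until the command has finished
        byte[] buf = new byte[1024];
        while (!channel.isClosed()) {
            while (out.available() > 0) out.read(buf);
            Thread.sleep(100);
        }
        System.out.println("exit status: " + channel.getExitStatus());
        System.out.println("stderr: " + err);

        channel.disconnect();
        session.disconnect();
    }
}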

Gulp task to SSH and then mysqldump

So I've got this scenario where I have a separate web server and MySQL server, and I can only connect to the MySQL server from the web server.
So basically, every time I have to go:
step 1: 'ssh -i ~/somecert.pem ubuntu@1.2.3.4'
step 2: 'mysqldump -u root -p'password' -h 6.7.8.9 database_name > output.sql'
I'm new to gulp, and my aim was to create a task that automates all this, so that running one gulp task delivers the SQL file.
This would make developers' lives a lot easier, since it would take just one command to download the latest DB dump.
This is how far I've got (gulpfile.js):
////////////////////////////////////////////////////////////////////
// Run: 'gulp download-db' to get latest SQL dump from production //
// File will be put under the 'dumps' folder                      //
////////////////////////////////////////////////////////////////////

// Load stuff
'use strict'
var gulp = require('gulp')
var GulpSSH = require('gulp-ssh')
var fs = require('fs');

// Function to get home path
function getUserHome() {
  return process.env.HOME || process.env.USERPROFILE;
}
var homepath = getUserHome();

///////////////////////////////////////
// SETTINGS (change if needed)       //
///////////////////////////////////////
var config = {
  // SSH connection
  host: '1.2.3.4',
  port: 22,
  username: 'ubuntu',
  //password: '1337p4ssw0rd', // Uncomment if needed
  privateKey: fs.readFileSync(homepath + '/certs/somecert.pem'), // Uncomment if needed
  // MySQL connection
  db_host: 'localhost',
  db_name: 'clients_db',
  db_username: 'root',
  db_password: 'dbp4ssw0rd',
}

////////////////////////////////////////////////
// Core script, don't need to touch from here //
////////////////////////////////////////////////

// Set up SSH connector
var gulpSSH = new GulpSSH({
  ignoreErrors: true,
  sshConfig: config
})

// Run the mysqldump
gulp.task('download-db', function(){
  return gulpSSH
    // runs the mysql dump
    .exec(['mysqldump -u '+config.db_username+' -p\''+config.db_password+'\' -h '+config.db_host+' '+config.db_name+''], {filePath: 'dump.sql'})
    // pipes output into local folder
    .pipe(gulp.dest('dumps'))
})

// Run search/replace "optional"
// Run search/replace "optional"
SSHing into the web server works fine, but I hit an issue when trying to get the mysqldump; I'm getting this message:
events.js:85
throw er; // Unhandled 'error' event
^
Error: Warning:
If I try the same mysqldump command manually on the server over SSH, I get:
Warning: mysqldump: unknown variable 'loose-local-infile=1'
followed by the correct MySQL dump output.
So I think this warning message is messing up my script. I would like to ignore warnings in cases like this, but I don't know how to do it or whether it's even possible.
Also, I read that using the password directly on the command line is not really good practice.
Ideally, I would like to have all the config vars loaded from another file, but this is my first gulp task and I'm not really familiar with how I would do that.
Can someone with experience in Gulp orient me towards a good way of getting this thing done? Or do you think I shouldn't be using Gulp for this at all?
Thanks!
As I suspected, that warning message was preventing the gulp task from finishing. I got rid of it by commenting out loose-local-infile=1 in /etc/mysql/my.cnf.
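As for loading the config vars from another file, one minimal approach is to keep them in a JSON file next to the gulpfile and require it. A sketch, where config.json is a made-up name and should be kept out of version control:

// config.json (git-ignored):
// { "host": "1.2.3.4", "port": 22, "username": "ubuntu",
//   "db_host": "localhost", "db_name": "clients_db",
//   "db_username": "root", "db_password": "dbp4ssw0rd" }
var config = require('./config.json');
// the private key still has to be read at runtime
config.privateKey = fs.readFileSync(homepath + '/certs/somecert.pem');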