ssh hangs when run from crontab

I want to use the OpenSSH client to execute some commands remotely and automatically. So I modified the function channel_handle_wfd(struct ssh *ssh, Channel *c) in channels.c as follows:
/* Modification inside channel_handle_wfd(): when the data just received
   ends with the shell prompt "$ ", queue the next scripted command on the
   channel's input buffer. */
if (0 == strncmp(buf + dlen - 2, "$ ", 2))
{
    switch (step++)
    {
    case 0:
        /* disable echo on the remote shell */
        sshbuf_put(c->input, "stty -echo\r\n", 12);
        break;
    case 1:
        /* ask the remote side to send the file "foo" with ZMODEM */
        sshbuf_put(c->input, "sz foo\r\n", 8);
        break;
    case 2:
        /* run the local ZMODEM receiver */
        do_zmodem();
        break;
    case 3:
        /* send Ctrl-D (EOT) to log out */
        sshbuf_put(c->input, "\004", 1);
        break;
    default:
        break;
    }
}
The program runs successfully in a console:
Last login: Tue Mar 1 2:30:01 2022 from 1.2.3.4
~$ stty -echo
~$ ~$ OO ~$ logout
Connection to 1.2.3.5 closed
but it hangs when run from crontab:
Last login: Tue Mar 1 2:32:01 2022 from 1.2.3.4
~$ stty

Related

lsyncd setup does not connect to remote

I configured my lsyncd in /etc/lsyncd/lsyncd.conf.lua as follows. How do I configure this file correctly?
settings {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd-status.log",
    statusInterval = 2
}
sync {
    default.rsync,
    source = "/home/john/Documents/reprogramming",
    target = "john.doe@localhost:~/reprogramming",
    rsync = {
        archive = false,
        acls = false,
        chmod = "D2755,F644",
        compress = true,
        links = false,
        owner = false,
        perms = false,
        verbose = true,
        rsh = "ssh -p 2222 -l john -i /home/john/.ssh/id_rsa -o StrictHostKeyChecking=no"
    }
}
This is the error message:
john@john:~$ tail -10 /var/log/lsyncd/lsyncd.log
Disconnected from 127.0.0.1 port 2222
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(228) [sender=3.2.3]
Thu Jun 16 11:11:01 2022 Error: Temporary or permanent failure on startup of /home/john/Documents/reprogramming/ -> john.doe@localhost:~/reprogramming/. Terminating since "insist" is not set.
Thu Jun 16 11:35:46 2022 Normal: --- Startup, daemonizing ---
Thu Jun 16 11:35:46 2022 Normal: recursive startup rsync: /home/john/Documents/reprogramming/ -> john.doe@localhost:~/reprogramming/
ssh: connect to host localhost port 2222: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(228) [sender=3.2.3]
Thu Jun 16 11:35:46 2022 Error: Temporary or permanent failure on startup of /home/john/Documents/reprogramming/ -> john.doe@localhost:~/reprogramming/. Terminating since "insist" is not set.
If I run ssh john.doe@localhost -p 2222 manually, it connects without any problem.
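A useful first check is to run by hand the same transfer lsyncd would start, reusing the rsh string from the config above (a sketch based only on the paths and options shown in the question):

# Run the transfer manually with the exact ssh options lsyncd passes to rsync.
# If this also fails with "Connection refused", the problem is the ssh
# transport on port 2222, not lsyncd itself.
rsync -av \
    -e "ssh -p 2222 -l john -i /home/john/.ssh/id_rsa -o StrictHostKeyChecking=no" \
    /home/john/Documents/reprogramming/ \
    john.doe@localhost:~/reprogramming/

If the manual run succeeds while lsyncd keeps failing, the difference usually lies in the environment lsyncd runs under (a different user, HOME, or key path).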

RabbitMQ messages are not consumed

I would like to use RabbitMQ to send messages from a webapp backend to a second module. On my laptop it works, but when I deploy the application on a VPS, even in dev mode, it doesn't work anymore. Could you please help me sort this out?
Current status:
If I check the queues on the VPS where both modules are installed, it looks OK (messages are added to the queue):
$ rabbitmqctl list_queues
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
MyMessages 2
When I launch the second module, I get the following log:
Waiting for a request on queue : MyMessages, hosted at localhost
It comes from the following Java code:
public static void main(String[] args) throws IOException, TimeoutException {
    RabbitMQConsumer rabbitMQConsumer = new RabbitMQConsumer();
    rabbitMQConsumer.waitForRequests();
    System.out.println("Waiting for a request on queue : " + AppConfig.QUEUE_NAME + ", hosted at " + AppConfig.QUEUE_HOST);
}

public RabbitMQConsumer() throws IOException, TimeoutException {
    mapper = new ObjectMapper();
    // Connect to the broker configured in AppConfig and open a channel
    ConnectionFactory connectionFactory = new ConnectionFactory();
    connectionFactory.setHost(AppConfig.QUEUE_HOST);
    Connection connection = connectionFactory.newConnection();
    channel = connection.createChannel();
}

public void waitForRequests() throws IOException {
    DefaultConsumer consumer = new DefaultConsumer(channel) {
        @Override
        public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
            try {
                System.out.println("Message received ! ");
                channel.basicAck(envelope.getDeliveryTag(), false);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    };
    // Declare the (durable) queue and attach the consumer to it
    channel.queueDeclare(AppConfig.QUEUE_NAME, true, false, false, null);
    channel.basicConsume(AppConfig.QUEUE_NAME, consumer);
}
Both modules are looking at the same queue and there are messages in the queue, so to me it looks like the messages are simply not consumed. I've also looked at the status of RabbitMQ, but I do not know how to interpret it:
$ invoke-rc.d rabbitmq-server status
● rabbitmq-server.service - RabbitMQ broker
Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2018-04-07 18:24:59 CEST; 1h 38min ago
Process: 17103 ExecStop=/usr/lib/rabbitmq/bin/rabbitmqctl shutdown (code=exited, status=0/SUCCESS)
Main PID: 17232 (beam.smp)
Status: "Initialized"
Tasks: 84 (limit: 4915)
CGroup: /system.slice/rabbitmq-server.service
├─17232 /usr/lib/erlang/erts-9.3/bin/beam.smp -W w -A 64 -P 1048576 -t 5000000 -stbt db -zdbbl 1280000 -K true -- -root /usr/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.7.4/ebin -noshell -noinput -s rabbit boot -sname rabbit@vps5322 -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit lager_log_root "/var/log/rabbitmq" -rabbit lager_default_file "/var/log/rabbitmq/rabbit@vps5322.log" -rabbit lager_upgrade_file "/var/log/rabbitmq/rabbit@vps5322_upgrade.log" -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/plugins:/usr/lib/rabbitmq/lib/rabbitmq_server-3.7.4/plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@vps5322-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@vps5322" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672
├─17319 /usr/lib/erlang/erts-9.3/bin/epmd -daemon
├─17453 erl_child_setup 1024
├─17475 inet_gethost 4
└─17476 inet_gethost 4
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: ## ##
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: ## ## RabbitMQ 3.7.4. Copyright (C) 2007-2018 Pivotal Software, Inc.
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: ########## Licensed under the MPL. See http://www.rabbitmq.com/
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: ###### ##
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: ########## Logs: /var/log/rabbitmq/rabbit@vps5322.log
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: /var/log/rabbitmq/rabbit@vps5322_upgrade.log
Apr 07 18:24:57 vps5322 rabbitmq-server[17232]: Starting broker...
Apr 07 18:24:59 vps5322 rabbitmq-server[17232]: systemd unit for activation check: "rabbitmq-server.service"
Apr 07 18:24:59 vps5322 systemd[1]: Started RabbitMQ broker.
Apr 07 18:24:59 vps5322 rabbitmq-server[17232]: completed with 0 plugins.
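The service status above only confirms that the broker is running. To check whether the second module is actually attached to MyMessages as a consumer, rabbitmqctl can also report per-queue consumer counts (a quick check on the VPS, assuming the default vhost listed earlier):

# Messages piling up with 0 consumers means nothing is reading the queue.
rabbitmqctl list_queues name messages consumers
# Lists each consumer together with the connection it belongs to.
rabbitmqctl list_consumers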
Finally, note that the webapp is a Play Framework application with these dependencies:
libraryDependencies ++= Seq(
  guice,
  "com.rabbitmq" % "amqp-client" % "5.2.0"
)
The second module is plain Java, built with Maven, with the following pom entry:
<dependency>
    <groupId>com.rabbitmq</groupId>
    <artifactId>amqp-client</artifactId>
    <version>5.2.0</version>
</dependency>
Any idea what the problem is? Thank you very much!
Finally I found the problem. This configuration is actually working, but I could not see it because of a crash in my own app that was not logged, due to an error in my Log4j configuration.
Just in case it helps someone: the error was that a local library included in my pom with a relative path (${project.basedir}) was found by my IDE but no longer once deployed on the VPS. To solve this, I just moved this (thankfully very small) library directly into my project. After fixing that, I had to reset RabbitMQ and then everything was fine:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
Thank you very much,
Regards,

Vagrant ssh stuck with "default: Warning: Connection timeout. Retrying..."

I am running Vagrant (1.7.4) with Salt on VirtualBox 4.3 on a headless Ubuntu 14.04 host. Salt is standalone (masterless). The reason I am using these versions is that they work on my local Ubuntu machine.
On vagrant up I get the following output:
==> default: Importing base box 'phusion/ubuntu-14.04-amd64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'phusion/ubuntu-14.04-amd64' is up to date...
==> default: Setting the name of the VM: drupal_default_1452863894453_19933
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2201.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 22 => 2201 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2201
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
vagrant ssh-config gives:
Host default
HostName 127.0.0.1
User vagrant
Port 2201
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/user/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL
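While the machine sits in that retry loop, the values above can be used to attempt the connection by hand, which helps tell an SSH/key problem apart from a guest that never finished booting (a sketch reusing the host, port and key reported by vagrant ssh-config):

# The same connection Vagrant keeps retrying, with verbose client output.
ssh -v -p 2201 -i /home/user/.vagrant.d/insecure_private_key \
    -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
    vagrant@127.0.0.1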
My Vagrantfile is:
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.host_name = "site#{rand(0..999999)}"

  config.vm.provider "virtualbox" do |v|
    config.ssh.insert_key = false
    v.memory = 2048
    v.cpus = 1
  end

  ## For masterless, mount your salt file root
  config.vm.synced_folder "salt/roots/", "/srv/salt/"

  # Network
  config.vm.network :private_network, ip: "172.16.0.100"

  # Server provisioner
  config.vm.provision :salt do |salt|
    salt.masterless = true
    salt.minion_config = "salt/minion"
    salt.run_highstate = true
    salt.bootstrap_options = "-P"
  end

  # Provisioning scripts
  config.vm.provision "dbsync", type: "shell", path: "provision/db.sh"
end
What could I have missed? Some Ubuntu network configuration? Some ssh configuration?

password prompt keeps coming up on the console

I wrote the command below, which copies the id_dsa.pub file to the other server as part of my auto-login feature. But every time, the following message comes up on the console:
spawn scp -o StrictHostKeyChecking=no /opt/mgtservices/.ssh/id_dsa.pub root@12.43.22.47:/root/.ssh/id_dsa.pub
Password:
Password:
This is the script I wrote for it:
function sshkeygenerate()
{
    if ! [ -f $HOME/.ssh/id_dsa.pub ]; then
        expect -c "
            spawn ssh-keygen -t dsa -f $HOME/.ssh/id_dsa
            expect y/n          { send y\r ; exp_continue }
            expect passphrase): { send \r ; exp_continue }
            expect again:       { send \r ; exp_continue }
            spawn chmod 700 $HOME/.ssh && chmod 700 $HOME/.ssh/*
            exit"
    fi
    expect -c "
        spawn scp -o StrictHostKeyChecking=no $HOME/.ssh/id_dsa.pub root@12.43.22.47:/root/.ssh/id_dsa.pub
        expect *assword: { send $ROOTPWD\r } expect yes/no { send yes\r ; exp_continue }
        spawn ssh -o StrictHostKeyChecking=no root@12.43.22.47 \"chmod 755 /root/.ssh/authorized_keys\"
        expect *assword: { send $ROOTPWD\r } expect yes/no { send yes\r ; exp_continue }
        spawn ssh -o StrictHostKeyChecking=no root@12.43.22.47 \"cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys\"
        expect *assword: { send $ROOTPWD\r } expect yes/no { send yes\r ; exp_continue }
        sleep 1
        exit"
}
You should first set up passwordless ssh to the destination server; then you won't need to enter the password when you do the scp.
Assuming 192.168.0.11 is the destination machine:
1) ssh-keygen -t rsa
2) ssh sheena@192.168.0.11 mkdir -p .ssh
3) cat .ssh/id_rsa.pub | ssh sheena@192.168.0.11 'cat >> .ssh/authorized_keys'
4) ssh sheena@192.168.0.11 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
Link for reference:
http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
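Where the OpenSSH ssh-copy-id helper is available, steps 2) to 4) can be collapsed into a single command (shown with the same example user and host):

# Appends the local public key to the remote authorized_keys,
# creating the remote ~/.ssh if it does not exist yet.
ssh-copy-id -i ~/.ssh/id_rsa.pub sheena@192.168.0.11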

upgraded redis 2.4.14 to redis 2.6.14, the command "service redis start" always hangs

I had Redis 2.4.14 installed. Yesterday I got Redis 2.6.14 and built it directly with "cd redis-2.6.14/src ; make && make install".
I removed the dump.rdb and redis.log left over from 2.4.14, and I also upgraded the configuration file to 2.6.14.
I had added redis as a service when I installed 2.4.14.
Now when I execute "service redis start", it always hangs and never prints the "OK" message.
[tys@localhost bin]# service redis start
Starting redis-server:
I can use Redis normally:
[tys@localhost redis]# redis-cli
redis 127.0.0.1:6379> set name tys
OK
redis 127.0.0.1:6379> get name
"tys"
But if I press Ctrl+C or Ctrl+Z, redis-cli just hangs.
When I reboot the system, the Linux boot process hangs on "Starting redis-server"
(sorry, I am too "young" to post an image: https://groups.google.com/forum/#!topic/redis-db/iQnlyAAWE9Y),
but I can still ssh into it; it's a virtual machine.
There is no error in redis.log:
[1420] 11 Aug 04:27:05.879 # Server started, Redis version 2.6.14
[1420] 11 Aug 04:27:05.880 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[1420] 11 Aug 04:27:05.903 * DB loaded from disk: 0.023 seconds
[1420] 11 Aug 04:27:05.903 * The server is now ready to accept connections on port 6379
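As an aside, the overcommit warning in that log can be addressed exactly as the message says:

# Takes effect immediately
sysctl vm.overcommit_memory=1
# Persists the setting across reboots
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf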
Here is my Redis init.d script:
#!/bin/bash
#
# redis - this script starts and stops the redis-server daemon
#
# chkconfig:   235 90 10
# description: Redis is a persistent key-value database
# processname: redis-server
# config:      /etc/redis.conf
# config:      /etc/sysconfig/redis
# pidfile:     /var/run/redis.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

redis="/usr/local/bin/redis-server"
prog=$(basename $redis)

REDIS_CONF_FILE="/etc/redis.conf"

[ -f /etc/sysconfig/redis ] && . /etc/sysconfig/redis

lockfile=/var/lock/subsys/redis

start() {
    [ -x $redis ] || exit 5
    [ -f $REDIS_CONF_FILE ] || exit 6
    echo -n $"Starting $prog: "
    daemon $redis $REDIS_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    stop
    start
}

reload() {
    echo -n $"Reloading $prog: "
    killproc $redis -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
        exit 2
esac
I resolved it with Josiah's help on https://groups.google.com/forum/#!forum/redis-db.
It was "daemonize no" in my redis.conf. Redis started normally after I switched it to "daemonize yes".

Redis 3.0.1 on CentOS 6.6 - the same problem. Tried with two different init scripts; 'daemonize yes' solves the problem!
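The start() function in the init script above runs redis-server through the daemon helper, which only returns once the launched command exits or detaches, so with "daemonize no" the script waits forever. A quick way to verify the setting in the config file the script points at (assuming /etc/redis.conf, as in REDIS_CONF_FILE above):

# Should print "daemonize yes" for the init script to come back with its OK.
grep -E '^daemonize' /etc/redis.conf
# Flip it if it is still set to no, then start the service again.
sed -i 's/^daemonize no/daemonize yes/' /etc/redis.conf
service redis start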