This is a little hard for me to explain, but I'll try.
I want to pass a time offset to Puppet for a cron job.
define cron::job (
  $url,
  $time_offset,
  $ensure = 'present',
  $minute = 5 + $time_offset,
)
Then in the actual cron resource I want to use */ together with the minute variable. Is this possible? My current implementation is failing, and I can't seem to find the answer in the docs, which suggests I'm going about this completely the wrong way.
This is my cron job.
cron { "job":
command => "wget -O - --save-cookies cookies.txt --load-cookies cookies.txt --keep-session-cookies https://${url}/ >/dev/null 2>/dev/null",
user => 'housekeeper',
minute => '*/$minute',
ensure => $ensure,
}
I would appreciate any feedback / suggestions.
Would I be better off just using a fixed minute rather than every 5 minutes, for example?
The reason I want to do this is that I want to leave the cron jobs themselves identical and just pass an offset to the define for each site.
This works:
define cron::job (
  $url,
  $time_offset,
) {
  $minute = 5 + $time_offset

  cron { "cron ${name}":
    ensure  => present,
    command => "wget -O - --save-cookies cookies.txt --load-cookies cookies.txt --keep-session-cookies https://${url}/ >/dev/null 2>/dev/null",
    user    => 'housekeeper',
    minute  => "*/${minute}",
    require => User['housekeeper'],
  }
}
user { 'housekeeper':
  ensure => present,
}

cron::job { 'job1':
  url         => 'http://example1.com',
  time_offset => 10,
}

cron::job { 'job2':
  url         => 'http://example2.com',
  time_offset => 15,
}
Then
[root@centos-72-x64 ~]# puppet apply /tmp/foo.pp
Notice: Compiled catalog for centos-72-x64 in environment production in 0.21 seconds
Notice: /Stage[main]/Main/User[housekeeper]/ensure: created
Notice: /Stage[main]/Main/Cron::Job[job2]/Cron[cron job2]/ensure: created
Notice: /Stage[main]/Main/Cron::Job[job1]/Cron[cron job1]/ensure: created
Notice: Finished catalog run in 0.05 seconds
And
[root@centos-72-x64 ~]# cat /var/spool/cron/housekeeper
# HEADER: This file was autogenerated at 2016-04-12 11:29:15 +0000 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: cron job2
*/20 * * * * wget -O - --save-cookies cookies.txt --load-cookies cookies.txt --keep-session-cookies https://http://example2.com/ >/dev/null 2>/dev/null
# Puppet Name: cron job1
*/15 * * * * wget -O - --save-cookies cookies.txt --load-cookies cookies.txt --keep-session-cookies https://http://example1.com/ >/dev/null 2>/dev/null
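One thing worth noting in the output above: the command template already prepends https://, so passing full URLs (http://example1.com) as the url parameter produces doubled schemes like https://http://example1.com/. Passing a bare hostname instead would give the crontab line that was presumably intended, e.g.:
*/15 * * * * wget -O - --save-cookies cookies.txt --load-cookies cookies.txt --keep-session-cookies https://example1.com/ >/dev/null 2>/dev/null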
ansible.posix.synchronize, a wrapper for rsync, is failing with the message
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
My playbook
---
- name: Test rsync
hosts: all
become: yes
become_user: postgres
tasks:
- name: Populate scripts/common using copy
copy:
src: common/
dest: /home/postgres/scripts/common
- name: Populate scripts/common using rsync
ansible.posix.synchronize:
src: common/
dest: /home/postgres/scripts/common
The "Populate scripts/common using copy" task executes with no problem.
Full error output
fatal: [<host>]: FAILED! => {
"changed": false,
"cmd": "sshpass -d3 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' /opt/common/ pg_deployment#<host>t:/home/postgres/scripts/common",
"invocation": {
"module_args": {
"_local_rsync_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"_local_rsync_path": "rsync",
"_substitute_controller": false,
"archive": true,
"checksum": false,
"compress": true,
"copy_links": false,
"delay_updates": true,
"delete": false,
"dest": "pg_deployment#<host>:/home/postgres/scripts/common",
"dest_port": null,
"dirs": false,
"existing_only": false,
"group": null,
"link_dest": null,
"links": null,
"mode": "push",
"owner": null,
"partial": false,
"perms": null,
"private_key": null,
"recursive": null,
"rsync_opts": [],
"rsync_path": "sudo -u postgres rsync",
"rsync_timeout": 0,
"set_remote_user": true,
"src": "/opt/common/",
"ssh_args": null,
"ssh_connection_multiplexing": false,
"times": null,
"verify_host": false
}
},
"msg": "Warning: Permanently added '<host>' (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n",
"rc": 5
}
Notes:
User pg_deployment has passwordless sudo to postgres. This Ansible playbook is being run inside a Docker container.
After messing with it a bit more, I found that I can run the rsync command directly (not using Ansible):
SSHPASS=<my_ssh_pass> sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/
The only difference I can see is that I used sshpass -e while Ansible defaulted to sshpass -d#. Could the credentials Ansible was trying to pass in be incorrect? If they are incorrect for ansible.posix.synchronize, then why aren't they incorrect for other Ansible tasks?
EDIT
Confirmed that if I run
sshpass -d10 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres
(I chose a random number, d10, for the file descriptor) I get the same error as above:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
This suggests that the problem is whatever Ansible is using as the file descriptor. It isn't a huge problem, since I can just pass the password to sshpass as an environment variable in my Docker container (it's ephemeral anyway), but I would still like to know what is going on with Ansible here.
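For context, a minimal sketch of how the two sshpass modes differ (the descriptor number and the MY_SSH_PASS variable are illustrative, not taken from the playbook; Ansible presumably sets the descriptor up for the rsync child it spawns):
# sshpass -d N reads the password from an already-open file descriptor N.
# Opened by hand in bash, the same style of invocation can work:
exec 10<<<"$MY_SSH_PASS"                        # hypothetical variable holding the password
sshpass -d10 ssh pg_deployment@<host> true
# sshpass -e instead reads the password from the SSHPASS environment variable:
SSHPASS="$MY_SSH_PASS" sshpass -e ssh pg_deployment@<host> true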
SOLUTION (using command)
---
- name: Create Postgres Cluster
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Create Scripts Directory
      file:
        path: /home/postgres/scripts
        state: directory
    - name: Populate scripts/common
      command: sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/scripts
      delegate_to: 127.0.0.1
      become: no
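Since the command task relies on sshpass -e, the SSHPASS environment variable has to be present where the command runs (the controller, i.e. the Docker container in my case). A rough sketch of how I provide it (the inventory and playbook file names are placeholders):
export SSHPASS='<my_ssh_pass>'          # read by sshpass -e on the controller
ansible-playbook -i inventory.ini playbook.yml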
I am trying to pass a Facter fact to puppet apply. Here's what I tried:
export FACTER_command="start"
puppet apply site.pp $FACTER_command
and in my code I have:
exec { 'some_exec':
  command => '/bin/bash -c "/some/path/to/scripts.sh -t some_arg $::command"',
  ...
I get this error message:
Error: '/bin/bash -c "/some/path/to/scripts.sh -t some_arg $::command"' returned 1 instead of one of [0]
Error: /Stage[main]/Standard/Exec[some_exec]/returns: change from 'notrun' to ['0'] failed: '/some/path/to/scripts.sh -t some_arg $::command"' returned 1 instead of one of [0]
Does anyone have any idea about this?
UPDATE
class standard {
  $param_test = "/some/path/to/scripts.sh -t some_arg ${::command}"

  file { 'kick_servers':
    ensure => 'file',
    path   => '/some/path/to/scripts.sh',
    owner  => 'some_user',
    group  => 'some_user',
    mode   => '0755',
    notify => Exec['some_exec'],
  }

  exec { 'some_exec':
    command => '/bin/bash -c ${param_test}',
    cwd     => "$home_user_dir",
    timeout => 1800,
  }
}

node default {
  include standard
}
And I get this error:
Error: '/bin/bash -c $param_test' returned 2 instead of one of [0]
Error: /Stage[main]/Standard/Exec[some_exec]/returns: change from 'notrun' to ['0'] failed: '/bin/bash -c $param_test' returned 2 instead of one of [0]
Yes. You have at least two issues you need to fix there:
1/
The immediate cause of your problem is that you have enclosed $::command inside single quotes, telling Puppet that you mean the literal string $::command, when you actually want the value of the fact there.
2/
You should not be passing $FACTER_command as an argument to puppet apply; you only need to export it as an environment variable (which you already did).
So:
Change puppet apply to:
puppet apply site.pp
Change your exec to:
exec { 'some_exec':
  command => "/bin/bash -c '/some/path/to/scripts.sh -t some_arg ${::command}'",
  ...
}
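So the whole flow looks roughly like this (a sketch; the script path is the same placeholder as above):
export FACTER_command="start"   # any FACTER_* environment variable becomes a fact
facter command                  # should print "start", confirming the fact is visible
puppet apply site.pp            # no extra argument needed; $::command resolves to "start"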
I have
celery==3.1.23
Django==1.9.1
redis==2.10.5
ii redis-server 2:2.8.19-3 amd64 Persistent key-value database with networ
ii redis-tools 2:2.8.19-3 amd64 Persistent key-value database with networ
My configuration settings have the lines
# Celery
BROKER_URL = 'redis://127.0.0.1:6379/0'
BROKER_TRANSPORT = 'redis'
# start worker with '$ celery -A intro worker -l debug'
and my configuration file celery.py (standard practice is to name it this way, but confusing in my opinion) is
from __future__ import absolute_import
import os
import django
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'intro.settings')
django.setup()
app = Celery('intro')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
The config /etc/default/celeryd (also confusing naming) is
# copy this file to /etc/default/celeryd
CELERYD_NODES="w1 w2 w3"
VIRTUAL_ENV_PATH="/srv/intro/bin"
# JRT
CELERY_BIN="${VIRTUAL_ENV_PATH}/celery"
# Where to chdir at start.
CELERYD_CHDIR="/srv/intro/intro"
# Python interpreter from environment.
ENV_PYTHON="$VIRTUAL_ENV_PATH/python"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd NOTE --beat is vital, otherwise scheduler
# will not run
CELERYD_OPTS="--concurrency=1 --beat"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="jimmy"
CELERYD_GROUP="jimmy"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="intro.settings"
#CELERY_BROKER_URL = 'redis://127.0.0.1:6379/0'
#export DJANGO_SETTINGS_MODULE="settings"
#CELERYD_MULTI="/home/webapps/.virtualenvs/crowdstaff/bin/django-admin.py celeryd_detach"
My /etc/init.d/celeryd file is
#!/bin/sh -e
VERSION=10.1
echo "celery init v${VERSION}."
if [ $(id -u) -ne 0 ]; then
echo "Error: This program can only be used by the root user."
echo " Unprivileged users must use the 'celery multi' utility, "
echo " or 'celery worker --detach'."
exit 1
fi
# Can be a runlevel symlink (e.g. S02celeryd)
if [ -L "$0" ]; then
SCRIPT_FILE=$(readlink "$0")
else
SCRIPT_FILE="$0"
fi
SCRIPT_NAME="$(basename "$SCRIPT_FILE")"
DEFAULT_USER="celery"
DEFAULT_PID_FILE="/var/run/celery/%n.pid"
DEFAULT_LOG_FILE="/var/log/celery/%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery worker --detach"
CELERY_DEFAULTS=${CELERY_DEFAULTS:-"/etc/default/${SCRIPT_NAME}"}
# Make sure executable configuration script is owned by root
_config_sanity() {
local path="$1"
local owner=$(ls -ld "$path" | awk '{print $3}')
local iwgrp=$(ls -ld "$path" | cut -b 6)
local iwoth=$(ls -ld "$path" | cut -b 9)
if [ "$(id -u $owner)" != "0" ]; then
echo "Error: Config script '$path' must be owned by root!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with mailicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change ownership of the script:"
echo " $ sudo chown root '$path'"
exit 1
fi
if [ "$iwoth" != "-" ]; then # S_IWOTH
echo "Error: Config script '$path' cannot be writable by others!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with malicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change the scripts permissions:"
echo " $ sudo chmod 640 '$path'"
exit 1
fi
if [ "$iwgrp" != "-" ]; then # S_IWGRP
echo "Error: Config script '$path' cannot be writable by group!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with malicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change the scripts permissions:"
echo " $ sudo chmod 640 '$path'"
exit 1
fi
}
if [ -f "$CELERY_DEFAULTS" ]; then
_config_sanity "$CELERY_DEFAULTS"
echo "Using config script: $CELERY_DEFAULTS"
. "$CELERY_DEFAULTS"
fi
# Sets --app argument for CELERY_BIN
CELERY_APP_ARG=""
if [ ! -z "$CELERY_APP" ]; then
CELERY_APP_ARG="--app=$CELERY_APP"
fi
CELERYD_USER=${CELERYD_USER:-$DEFAULT_USER}
# Set CELERY_CREATE_DIRS to always create log/pid dirs.
CELERY_CREATE_DIRS=${CELERY_CREATE_DIRS:-0}
CELERY_CREATE_RUNDIR=$CELERY_CREATE_DIRS
CELERY_CREATE_LOGDIR=$CELERY_CREATE_DIRS
if [ -z "$CELERYD_PID_FILE" ]; then
CELERYD_PID_FILE="$DEFAULT_PID_FILE"
CELERY_CREATE_RUNDIR=1
fi
if [ -z "$CELERYD_LOG_FILE" ]; then
CELERYD_LOG_FILE="$DEFAULT_LOG_FILE"
CELERY_CREATE_LOGDIR=1
fi
CELERYD_LOG_LEVEL=${CELERYD_LOG_LEVEL:-${CELERYD_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}
CELERY_BIN=${CELERY_BIN:-"celery"}
CELERYD_MULTI=${CELERYD_MULTI:-"$CELERY_BIN multi"}
CELERYD_NODES=${CELERYD_NODES:-$DEFAULT_NODES}
export CELERY_LOADER
if [ -n "$2" ]; then
CELERYD_OPTS="$CELERYD_OPTS $2"
fi
CELERYD_LOG_DIR=`dirname $CELERYD_LOG_FILE`
CELERYD_PID_DIR=`dirname $CELERYD_PID_FILE`
# Extra start-stop-daemon options, like user/group.
if [ -n "$CELERYD_CHDIR" ]; then
DAEMON_OPTS="$DAEMON_OPTS --workdir=$CELERYD_CHDIR"
fi
check_dev_null() {
if [ ! -c /dev/null ]; then
echo "/dev/null is not a character device!"
exit 75 # EX_TEMPFAIL
fi
}
maybe_die() {
if [ $? -ne 0 ]; then
echo "Exiting: $* (errno $?)"
exit 77 # EX_NOPERM
fi
}
create_default_dir() {
if [ ! -d "$1" ]; then
echo "- Creating default directory: '$1'"
mkdir -p "$1"
maybe_die "Couldn't create directory $1"
echo "- Changing permissions of '$1' to 02755"
chmod 02755 "$1"
maybe_die "Couldn't change permissions for $1"
if [ -n "$CELERYD_USER" ]; then
echo "- Changing owner of '$1' to '$CELERYD_USER'"
chown "$CELERYD_USER" "$1"
maybe_die "Couldn't change owner of $1"
fi
if [ -n "$CELERYD_GROUP" ]; then
echo "- Changing group of '$1' to '$CELERYD_GROUP'"
chgrp "$CELERYD_GROUP" "$1"
maybe_die "Couldn't change group of $1"
fi
fi
}
check_paths() {
if [ $CELERY_CREATE_LOGDIR -eq 1 ]; then
create_default_dir "$CELERYD_LOG_DIR"
fi
if [ $CELERY_CREATE_RUNDIR -eq 1 ]; then
create_default_dir "$CELERYD_PID_DIR"
fi
}
create_paths() {
create_default_dir "$CELERYD_LOG_DIR"
create_default_dir "$CELERYD_PID_DIR"
}
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
_get_pidfiles () {
# note: multi < 3.1.14 output to stderr, not stdout, hence the redirect.
${CELERYD_MULTI} expand "${CELERYD_PID_FILE}" ${CELERYD_NODES} 2>&1
}
_get_pids() {
found_pids=0
my_exitcode=0
for pidfile in $(_get_pidfiles); do
local pid=`cat "$pidfile"`
local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
echo "bad pid file ($pidfile)"
one_failed=true
my_exitcode=1
else
found_pids=1
echo "$pid"
fi
if [ $found_pids -eq 0 ]; then
echo "${SCRIPT_NAME}: All nodes down"
exit $my_exitcode
fi
done
}
_chuid () {
su "$CELERYD_USER" -c "$CELERYD_MULTI $*"
}
start_workers () {
if [ ! -z "$CELERYD_ULIMIT" ]; then
ulimit $CELERYD_ULIMIT
fi
_chuid $* start $CELERYD_NODES $DAEMON_OPTS \
--pidfile="$CELERYD_PID_FILE" \
--logfile="$CELERYD_LOG_FILE" \
--loglevel="$CELERYD_LOG_LEVEL" \
$CELERY_APP_ARG \
$CELERYD_OPTS
}
dryrun () {
(C_FAKEFORK=1 start_workers --verbose)
}
stop_workers () {
_chuid stopwait $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers () {
_chuid restart $CELERYD_NODES $DAEMON_OPTS \
--pidfile="$CELERYD_PID_FILE" \
--logfile="$CELERYD_LOG_FILE" \
--loglevel="$CELERYD_LOG_LEVEL" \
$CELERY_APP_ARG \
$CELERYD_OPTS
}
kill_workers() {
_chuid kill $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers_graceful () {
echo "WARNING: Use with caution in production"
echo "The workers will attempt to restart, but they may not be able to."
local worker_pids=
worker_pids=`_get_pids`
[ "$one_failed" ] && exit 1
for worker_pid in $worker_pids; do
local failed=
kill -HUP $worker_pid 2> /dev/null || failed=true
if [ "$failed" ]; then
echo "${SCRIPT_NAME} worker (pid $worker_pid) could not be restarted"
one_failed=true
else
echo "${SCRIPT_NAME} worker (pid $worker_pid) received SIGHUP"
fi
done
[ "$one_failed" ] && exit 1 || exit 0
}
check_status () {
my_exitcode=0
found_pids=0
local one_failed=
for pidfile in $(_get_pidfiles); do
if [ ! -r $pidfile ]; then
echo "${SCRIPT_NAME} down: no pidfiles found"
one_failed=true
break
fi
local node=`basename "$pidfile" .pid`
local pid=`cat "$pidfile"`
local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
echo "bad pid file ($pidfile)"
one_failed=true
else
local failed=
kill -0 $pid 2> /dev/null || failed=true
if [ "$failed" ]; then
echo "${SCRIPT_NAME} (node $node) (pid $pid) is down, but pidfile exists!"
one_failed=true
else
echo "${SCRIPT_NAME} (node $node) (pid $pid) is up..."
fi
fi
done
[ "$one_failed" ] && exit 1 || exit 0
}
case "$1" in
start)
check_dev_null
check_paths
start_workers
;;
stop)
check_dev_null
check_paths
stop_workers
;;
reload|force-reload)
echo "Use restart"
;;
status)
check_status
;;
restart)
check_dev_null
check_paths
restart_workers
;;
graceful)
check_dev_null
restart_workers_graceful
;;
kill)
check_dev_null
kill_workers
;;
dryrun)
check_dev_null
dryrun
;;
try-restart)
check_dev_null
check_paths
restart_workers
;;
create-paths)
check_dev_null
create_paths
;;
check-paths)
check_dev_null
check_paths
;;
*)
echo "Usage: /etc/init.d/${SCRIPT_NAME} {start|stop|restart|graceful|kill|dryrun|create-paths}"
exit 64 # EX_USAGE
;;
esac
exit 0
This is old, very long, and seems to contain nothing I can change to affect the broker used, except the location of the defaults script, CELERY_DEFAULTS=/etc/default/celeryd (a confusing name again). I admit I pretty much copied and pasted this script without fully understanding it, though I do know how init.d scripts work.
When I run /etc/init.d/celeryd start, the workers start up but ignore the BROKER Django settings pointing at my Redis server and try to connect to RabbitMQ instead. The log file /var/log/celery/w1.log shows:
[2016-11-30 23:44:51,873: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
So Celery is trying to use RabbitMQ, not Redis. There are other posts on Stack Overflow complaining of the same problem, but none are resolved (as far as I can tell). I put djcelery in INSTALLED_APPS, as it seemed to make the celeryd_multi management command available, but I don't want to use celery beat, and the documentation says this is no longer necessary. I have my own queue set up to run management commands, and I have had too many problems setting up celerybeat in the past.
I have got the thing working by running sudo -u jimmy /srv/intro/bin/celery -A intro worker &, and this works and uses the correct Redis queue (does anyone know why it is called a broker?), but it won't restart on a server power cycle, does not write to the log files, and I just don't feel this is a clean way to run Celery workers.
I don't really want to use /etc/init.d scripts, as that is the old way of doing things; Upstart came and went as its replacement, and now systemd is the supported way of doing this (please correct me if I am wrong). There is no mention of these methods in the official documentation:
http://docs.celeryproject.org/en/v4.0.0/userguide/daemonizing.html#init-script-celeryd
which makes me think that Celery is no longer being actively supported, and perhaps there is a better-maintained way of doing this. It is a wonder it has not been built into the core.
I did find this
https://github.com/celery/celery/blob/3.1/extra/supervisord/supervisord.conf
but there is no mention of a broker in the config files, and I doubt this will help me use Redis.
How do I get Celery running as a daemon that starts automatically on reboot and uses Redis as the message queue? Or is using the RabbitMQ message queue my only option for running Django functions asynchronously with Celery?
To ensure Celery loads the correct broker, add the broker parameter to the Celery class:
app = Celery('intro', broker=settings.BROKER_URL)
Reference:
http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#application
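You can confirm which broker the workers picked up from the startup banner (a sketch; the exact banner layout varies between Celery versions):
/srv/intro/bin/celery -A intro worker -l info
# the banner should now report the Redis URL as the transport, e.g.
# .> transport:   redis://127.0.0.1:6379/0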
I installed Redis 2.4.14 previously.
Yesterday I got Redis 2.6.14 and simply ran "cd redis-2.6.14/src ; make && make install".
Then I removed the dump.rdb and redis.log left over from redis-2.4.14.
I also upgraded the configuration file to 2.6.14.
I had added Redis as a service when I installed redis-2.4.14.
When I execute "service redis start", it always hangs and never prints "OK":
[tys@localhost bin]# service redis start
Starting redis-server:
I can use Redis normally:
[tys@localhost redis]# redis-cli
redis 127.0.0.1:6379> set name tys
OK
redis 127.0.0.1:6379> get name
"tys"
but if I press Ctrl+C or Ctrl+Z, redis-cli hangs.
When I reboot the system, the Linux boot process hangs on "Starting redis-server".
(Sorry, I don't have enough reputation to post an image: https://groups.google.com/forum/#!topic/redis-db/iQnlyAAWE9Y)
But I can SSH into the machine; it's a virtual machine.
There is no error in the redis.log.
[1420] 11 Aug 04:27:05.879 # Server started, Redis version 2.6.14
[1420] 11 Aug 04:27:05.880 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[1420] 11 Aug 04:27:05.903 * DB loaded from disk: 0.023 seconds
[1420] 11 Aug 04:27:05.903 * The server is now ready to accept connections on port 6379
Here is my Redis init.d script:
#!/bin/bash
#
#redis - this script starts and stops the redis-server daemon
#
# chkconfig: 235 90 10
# description: Redis is a persistent key-value database
# processname: redis-server
# config: /etc/redis.conf
# config: /etc/sysconfig/redis
# pidfile: /var/run/redis.pid
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
redis="/usr/local/bin/redis-server"
prog=$(basename $redis)
REDIS_CONF_FILE="/etc/redis.conf"
[ -f /etc/sysconfig/redis ] && . /etc/sysconfig/redis
lockfile=/var/lock/subsys/redis
start() {
[ -x $redis ] || exit 5
[ -f $REDIS_CONF_FILE ] || exit 6
echo -n $"Starting $prog: "
daemon $redis $REDIS_CONF_FILE
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
}
stop() {
echo -n $"Stopping $prog: "
killproc $prog -QUIT
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
restart() {
stop
start
}
reload() {
echo -n $"Reloading $prog: "
killproc $redis -HUP
RETVAL=$?
echo
}
force_reload() {
restart
}
rh_status() {
status $prog
}
rh_status_q() {
rh_status >/dev/null 2>&1
}
case "$1" in
start)
rh_status_q && exit 0
$1
;;
stop)
rh_status_q || exit 0
$1
;;
restart|configtest)
$1
;;
reload)
rh_status_q || exit 7
$1
;;
force-reload)
force_reload
;;
status)
rh_status
;;
condrestart|try-restart)
rh_status_q || exit 0
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
exit 2
esac
I resolved it with Josiah's help on https://groups.google.com/forum/#!forum/redis-db.
It was "daemonize no" in my redis.conf. Redis started normally after I switched to "daemonize yes".
Version 3.0.1 on CentOS 6.6 - the same problem.
Tried with two different init scripts.
'daemonize yes' solves the problem!
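For reference, a minimal sketch of the change (edit the config the init script points at, then restart; the sed line is just one way to do it):
# /etc/redis.conf is the REDIS_CONF_FILE used by the init script above
sed -i 's/^daemonize no/daemonize yes/' /etc/redis.conf
service redis restart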
Has anyone had any success using start-stop-daemon and mono-service2 together? I've been fighting this for a few days now and have gotten various bits to work, but have had no success in getting a fully functional init script for a mono service.
Here is what I have learned to date:
The mono or mono-service executable must be the value of the DAEMON variable (you can't list your own exe as the DAEMON).
You must use the --background flag; otherwise, when this script is executed from a package installer (a deb in my case), the service terminates when the installer ends (it has something to do with how the installer forks processes ... I haven't investigated this much).
In other scripts I have had success passing the pid file with the mono-service flag and using it to stop the daemon, but for some reason it doesn't work here. As such, the script below does not stop the service - I'm not sure why. Start works fine.
And here is my partially functional init script:
#! /bin/sh
### BEGIN INIT INFO
# Provides: ServiceName
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Starts and Stops Service
# Description: Service start|stop|restart
### END INIT INFO
# Author: Author
#
# Do NOT "set -e"
# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="Description of the service"
NAME=Service.exe
DAEMONNAME=ServiceDaemon.sh
INSTALLDIR=/usr/sbin/
DAEMON=/usr/bin/mono-service2
EXENAME=Service.exe
PIDFILE=/var/run/$DAEMONNAME.pid
DAEMON_ARGS=" -l:$PIDFILE $INSTALLDIR/$EXENAME"
#DAEMON_ARGS=" $INSTALLDIR/$EXENAME"
SCRIPTNAME=/etc/init.d/$DAEMONNAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh
# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions
#
# Function that starts the daemon/service
#
do_start()
{
# Return
# 0 if daemon has been started
# 1 if daemon was already running
# 2 if daemon could not be started
start-stop-daemon --start --quiet --background --exec $DAEMON --test > /dev/null \
|| return 1
start-stop-daemon --start --quiet --background --exec $DAEMON -- \
$DAEMON_ARGS \
|| return 2
# Add code here, if necessary, that waits for the process to be ready
# to handle requests from services started subsequently which depend
# on this one. As a last resort, sleep for some time.
}
#
# Function that stops the daemon/service
#
do_stop()
{
# Return
# 0 if daemon has been stopped
# 1 if daemon was already stopped
# 2 if daemon could not be stopped
# other if a failure occurred
start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --name $NAME
RETVAL="$?"
[ "$RETVAL" = 2 ] && return 2
# Wait for children to finish too if this is a daemon that forks
# and if the daemon is only ever run from this initscript.
# If the above conditions are not satisfied then add some other code
# that waits for the process to drop all resources that could be
# needed by services started subsequently. A last resort is to
# sleep for some time.
start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
[ "$?" = 2 ] && return 2
# Many daemons don't delete their pidfiles when they exit.
rm -f $PIDFILE
return "$RETVAL"
}
#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
#
# If the daemon can reload its configuration without
# restarting (for example, when it is sent a SIGHUP),
# then implement that here.
#
start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
return 0
}
case "$1" in
start)
[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
do_start
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
stop)
[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
do_stop
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
status)
status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
#reload|force-reload)
#
# If do_reload() is not implemented then leave this commented out
# and leave 'force-reload' as an alias for 'restart'.
#
#log_daemon_msg "Reloading $DESC" "$NAME"
#do_reload
#log_end_msg $?
#;;
restart|force-reload)
#
# If the "reload" option is implemented then remove the
# 'force-reload' alias
#
log_daemon_msg "Restarting $DESC" "$NAME"
do_stop
case "$?" in
0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
*)
# Failed to stop
log_end_msg 1
;;
esac
;;
*)
#echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2
echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
exit 3
;;
esac
:
We had a lot of issues with mono-service and ended up implementing our own "service" code in our app. Nothing hard, just grabbing some signals:
UnixSignal intr = new UnixSignal (Signum.SIGINT);
UnixSignal term = new UnixSignal (Signum.SIGTERM);
UnixSignal hup = new UnixSignal (Signum.SIGHUP);
UnixSignal usr2 = new UnixSignal (Signum.SIGUSR2);
UnixSignal[] signals = new UnixSignal[] { intr, term, hup, usr2 };
for (bool running = true; running; )
{
    int idx = UnixSignal.WaitAny(signals);
    if (idx < 0 || idx >= signals.Length) continue;

    log.Debug("daemon: received signal " + signals[idx].Signum.ToString());

    if (intr.IsSet || term.IsSet)
    {
        intr.Reset();
        term.Reset();
        log.Debug("daemon: stopping...");
        running = false;
    }
    else if (hup.IsSet)
    {
        // Ignore. Could be used to reload configuration.
        hup.Reset();
    }
    else if (usr2.IsSet)
    {
        usr2.Reset();
        // do something
    }
}
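With that in place the app can be run straight under mono and stopped with a plain SIGTERM; a rough sketch (paths and pid-file name are illustrative, Service.exe is the name used elsewhere in this thread):
mono /usr/sbin/Service.exe &                 # run the app directly, no mono-service needed
echo $! > /var/run/service.pid               # hypothetical pid file location
kill -TERM "$(cat /var/run/service.pid)"     # the SIGTERM handler above shuts it down cleanly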
I know this question is old but there are no accepted answers. I tinkered around with this for a while too and came up with a daemon script which worked like a charm for me. I blogged about it here: http://www.geekytidbits.com/start-stop-daemon-with-mono-service2/
I got this script working with a couple of minor changes:
A pidfile in /var/run only works if you run as root - if you try to run the script without sudo, mono-service will fail silently.
Use --pidfile instead of --name to find the service to stop.
do_stop()
{
    test -f $PIDFILE && kill `cat $PIDFILE` && return 2
    start-stop-daemon --stop --quiet --verbose --oknodo --retry=0/30/KILL/5 \
        --exec mono-service2
    [ "$?" = 2 ] && return 2
    # Many daemons don't delete their pidfiles when they exit.
    rm -f $PIDFILE
    return "$RETVAL"
}
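If you prefer to keep start-stop-daemon in charge of the stop as well, the --pidfile form mentioned above would look something like this (a sketch, not tested against this particular service):
# stop via the pidfile rather than matching on --name
start-stop-daemon --stop --quiet --oknodo --retry=TERM/30/KILL/5 --pidfile "$PIDFILE"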
So it works; I think that's just because you can't stop this process with the start-stop-daemon command.
I'm learning to use Mono now; your post helped me very much. Thank you.
My English is poor; forgive my half-baked English.