NoraUi - Scenario usage

Is it possible to use the same scenario to test different data entries, where some entries are expected to succeed and others are expected to fail? As long as the actual behavior matches the expectation, the test should pass.
For example, a login should succeed with one data entry and fail with another, but the overall test run must still end in success.

Yes, such a sample exists in the NoraUi CI:
https://github.com/NoraUi/NoraUi/blob/master/src/test/resources/data/in/hello.csv
author;zip;city;element;element2;date;title;Result
Jenkins T1;35000;Rennes;smile;smile;16/01/2050;;
Jenkins T2;75000;Paris;smile;smile;;;31
Jenkins T3;56100;Lorient;smile;smile;;;25
Jenkins T4;35000;Rennes;smile;smile;;;
Jenkins T5;35000;Rennes;noExistElement;noExistElement;;;42
Jenkins T6;35000;;smile;smile;;;2
Jenkins T7;35000;Rennes;;;;;4
Jenkins T8;;Rennes;;smile;smile;;58
The Result column (from the input data provider) holds the number of the step that is expected to be KO.
For example:
Jenkins T5;35000;Rennes;noExistElement;noExistElement;;;42
step N°42 of https://github.com/NoraUi/NoraUi/blob/master/src/test/resources/steps/hello.feature is KO because noExistElement does not exist:
When I click on $bakery.DemoPage-<element>
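As a quick sanity check of the data file, you can list each scenario with its expected KO step straight from the CSV. A minimal sketch (this awk one-liner is illustrative, not part of NoraUi):
# List each scenario and its expected KO step from hello.csv ($8 is the Result column)
awk -F';' 'NR > 1 { printf "%s -> expected KO step: %s\n", $1, ($8 == "" ? "none" : $8) }' hello.csv
# e.g. Jenkins T5 -> expected KO step: 42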
Afterwards, in your CI:
https://noraui.github.io/#continuousIntegration
Travis CI online sample (a Unix shell script that uses sed): https://github.com/NoraUi/NoraUi/blob/master/test/run.sh
echo "***************************************************"
echo "** Integration tests verification **"
echo "***************************************************"
counters1=$(sed -n 's:.*\[Excel\] > <EXPECTED_RESULTS_1>\(.*\)</EXPECTED_RESULTS_1>.*:\1:p' nonaui.log | head -n 1)
echo "******** $counters1"
nb_counters1=$(sed -n ":;s/$counters1//p;t" nonaui.log | sed -n '$=')
echo "********" found $nb_counters1 times
counters2=$(sed -n 's:.*\[Excel\] > <EXPECTED_RESULTS_2>\(.*\)</EXPECTED_RESULTS_2>.*:\1:p' nonaui.log | head -n 1)
echo "******** $counters2"
nb_counters2=$(sed -n ":;s/$counters2//p;t" nonaui.log | sed -n '$=')
echo "******** found $nb_counters2 times"
# 3 occurrences expected = 1 real result + 2 data-provider counters (Excel and CSV)
if [ "$nb_counters1" == "3" ] && [ "$nb_counters2" == "3" ]; then
echo "******** All counter is SUCCESS"
else
echo "******** All counter is FAIL"
echo "$counters1 found $nb_counters1 times"
echo "$counters2 found $nb_counters2 times"
pwd
ls -l
cat target/reports/html/index.html
exit 255
fi
echo "***************************************************"
echo "** Unit tests verification **"
echo "***************************************************"
counterFailures=$(sed -n 's/.*\[\(.*\)\] Tests run: \(.*\), Failures: \([1-9]\), Errors: \(.*\), Skipped: \(.*\), Time elapsed.*UT.*/\3/p' nonaui.log | head -n 1)
echo "******** counter Failures: $counterFailures"
counterErrors=$(sed -n 's/.*\[\(.*\)\] Tests run: \(.*\), Failures: \(.*\), Errors: \([1-9]\), Skipped: \(.*\), Time elapsed.*UT.*/\4/p' nonaui.log | head -n 1)
echo "******** counter Errors: $counterErrors"
if [ "$counterFailures" == "" ] && [ "$counterErrors" == "" ]; then
echo "******** All unit test are SUCCESS"
else
if [ "$counterFailures" != "" ]; then
echo "******** At least one unit test is Failure"
fi
if [ "$counterErrors" != "" ]; then
echo "******** At least one unit test is Error"
fi
exit 255
fi
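The sed loop above counts how many times each expected-results string occurs in nonaui.log. If that idiom is hard to read, a hedged equivalent using grep gives the same count (assuming $counters1 contains no regex metacharacters; -F treats it as a fixed string):
# Count occurrences of the expected-results string in the log
nb_counters1=$(grep -o -F "$counters1" nonaui.log | wc -l)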

Related

How to run a command with an if condition in the background on the Linux command line

I want to run the following command in background:
if [ ! -e test.txt ]; then echo test; else echo test1 && echo test2; fi;
I tried:
if [ ! -e test.txt ]; then echo test; else echo test1 && echo test2 > /dev/null 2>&1 &; fi;
But it gives me the error: -bash: syntax error near unexpected token `;'
I also tried:
if [ ! -e test.txt ]; then echo test; else echo test1 && echo test2; fi; > /dev/null 2>&1 &
It worked but not in the background.
Is there any method to do that? Thanks in advance!
I tried to use:
if [ ! -e test.txt ]; then echo test; else echo test1 && echo test2; fi > /dev/null 2>/dev/null &
And it seems to work now.
You can also make a shell script named hello.sh and run it as ./hello.sh &. Hope this helps.
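The root cause of the original syntax error is that & already terminates a command, so it cannot be followed by ;. An if statement becomes a single backgroundable unit once it is grouped. A minimal sketch of both grouping options:
# Brace group: background the whole compound command
{ if [ ! -e test.txt ]; then echo test; else echo test1 && echo test2; fi; } > /dev/null 2>&1 &
# Subshell: same effect, runs in a child shell
( if [ ! -e test.txt ]; then echo test; else echo test1 && echo test2; fi ) > /dev/null 2>&1 &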

Running Celery as daemon won't use redis broker

I have
celery==3.1.23
Django==1.9.1
redis==2.10.5
ii redis-server 2:2.8.19-3 amd64 Persistent key-value database with networ
ii redis-tools 2:2.8.19-3 amd64 Persistent key-value database with networ
My configuration settings have the lines
# Celery
BROKER_URL = 'redis://127.0.0.1:6379/0'
BROKER_TRANSPORT = 'redis'
# start worker with '$ celery -A intro worker -l debug'
and my configuration file celery.py (standard practice is to name it this way, but confusing in my opinion) is
from __future__ import absolute_import
import os
import django
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'intro.settings')
django.setup()
app = Celery('intro')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
The config /etc/default/celeryd (also confusing naming) is
# copy this file to /etc/default/celeryd
CELERYD_NODES="w1 w2 w3"
VIRTUAL_ENV_PATH="/srv/intro/bin"
# JRT
CELERY_BIN="${VIRTUAL_ENV_PATH}/celery"
# Where to chdir at start.
CELERYD_CHDIR="/srv/intro/intro"
# Python interpreter from environment.
ENV_PYTHON="$VIRTUAL_ENV_PATH/python"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd NOTE --beat is vital, otherwise scheduler
# will not run
CELERYD_OPTS="--concurrency=1 --beat"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="jimmy"
CELERYD_GROUP="jimmy"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="intro.settings"
#CELERY_BROKER_URL = 'redis://127.0.0.1:6379/0'
#export DJANGO_SETTINGS_MODULE="settings"
#CELERYD_MULTI="/home/webapps/.virtualenvs/crowdstaff/bin/django-admin.py celeryd_detach"
My /etc/init.d/celeryd file is
#!/bin/sh -e
VERSION=10.1
echo "celery init v${VERSION}."
if [ $(id -u) -ne 0 ]; then
echo "Error: This program can only be used by the root user."
echo " Unprivileged users must use the 'celery multi' utility, "
echo " or 'celery worker --detach'."
exit 1
fi
# Can be a runlevel symlink (e.g. S02celeryd)
if [ -L "$0" ]; then
SCRIPT_FILE=$(readlink "$0")
else
SCRIPT_FILE="$0"
fi
SCRIPT_NAME="$(basename "$SCRIPT_FILE")"
DEFAULT_USER="celery"
DEFAULT_PID_FILE="/var/run/celery/%n.pid"
DEFAULT_LOG_FILE="/var/log/celery/%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery worker --detach"
CELERY_DEFAULTS=${CELERY_DEFAULTS:-"/etc/default/${SCRIPT_NAME}"}
# Make sure executable configuration script is owned by root
_config_sanity() {
local path="$1"
local owner=$(ls -ld "$path" | awk '{print $3}')
local iwgrp=$(ls -ld "$path" | cut -b 6)
local iwoth=$(ls -ld "$path" | cut -b 9)
if [ "$(id -u $owner)" != "0" ]; then
echo "Error: Config script '$path' must be owned by root!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with mailicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change ownership of the script:"
echo " $ sudo chown root '$path'"
exit 1
fi
if [ "$iwoth" != "-" ]; then # S_IWOTH
echo "Error: Config script '$path' cannot be writable by others!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with malicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change the scripts permissions:"
echo " $ sudo chmod 640 '$path'"
exit 1
fi
if [ "$iwgrp" != "-" ]; then # S_IWGRP
echo "Error: Config script '$path' cannot be writable by group!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with malicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change the scripts permissions:"
echo " $ sudo chmod 640 '$path'"
exit 1
fi
}
if [ -f "$CELERY_DEFAULTS" ]; then
_config_sanity "$CELERY_DEFAULTS"
echo "Using config script: $CELERY_DEFAULTS"
. "$CELERY_DEFAULTS"
fi
# Sets --app argument for CELERY_BIN
CELERY_APP_ARG=""
if [ ! -z "$CELERY_APP" ]; then
CELERY_APP_ARG="--app=$CELERY_APP"
fi
CELERYD_USER=${CELERYD_USER:-$DEFAULT_USER}
# Set CELERY_CREATE_DIRS to always create log/pid dirs.
CELERY_CREATE_DIRS=${CELERY_CREATE_DIRS:-0}
CELERY_CREATE_RUNDIR=$CELERY_CREATE_DIRS
CELERY_CREATE_LOGDIR=$CELERY_CREATE_DIRS
if [ -z "$CELERYD_PID_FILE" ]; then
CELERYD_PID_FILE="$DEFAULT_PID_FILE"
CELERY_CREATE_RUNDIR=1
fi
if [ -z "$CELERYD_LOG_FILE" ]; then
CELERYD_LOG_FILE="$DEFAULT_LOG_FILE"
CELERY_CREATE_LOGDIR=1
fi
CELERYD_LOG_LEVEL=${CELERYD_LOG_LEVEL:-${CELERYD_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}
CELERY_BIN=${CELERY_BIN:-"celery"}
CELERYD_MULTI=${CELERYD_MULTI:-"$CELERY_BIN multi"}
CELERYD_NODES=${CELERYD_NODES:-$DEFAULT_NODES}
export CELERY_LOADER
if [ -n "$2" ]; then
CELERYD_OPTS="$CELERYD_OPTS $2"
fi
CELERYD_LOG_DIR=`dirname $CELERYD_LOG_FILE`
CELERYD_PID_DIR=`dirname $CELERYD_PID_FILE`
# Extra start-stop-daemon options, like user/group.
if [ -n "$CELERYD_CHDIR" ]; then
DAEMON_OPTS="$DAEMON_OPTS --workdir=$CELERYD_CHDIR"
fi
check_dev_null() {
if [ ! -c /dev/null ]; then
echo "/dev/null is not a character device!"
exit 75 # EX_TEMPFAIL
fi
}
maybe_die() {
if [ $? -ne 0 ]; then
echo "Exiting: $* (errno $?)"
exit 77 # EX_NOPERM
fi
}
create_default_dir() {
if [ ! -d "$1" ]; then
echo "- Creating default directory: '$1'"
mkdir -p "$1"
maybe_die "Couldn't create directory $1"
echo "- Changing permissions of '$1' to 02755"
chmod 02755 "$1"
maybe_die "Couldn't change permissions for $1"
if [ -n "$CELERYD_USER" ]; then
echo "- Changing owner of '$1' to '$CELERYD_USER'"
chown "$CELERYD_USER" "$1"
maybe_die "Couldn't change owner of $1"
fi
if [ -n "$CELERYD_GROUP" ]; then
echo "- Changing group of '$1' to '$CELERYD_GROUP'"
chgrp "$CELERYD_GROUP" "$1"
maybe_die "Couldn't change group of $1"
fi
fi
}
check_paths() {
if [ $CELERY_CREATE_LOGDIR -eq 1 ]; then
create_default_dir "$CELERYD_LOG_DIR"
fi
if [ $CELERY_CREATE_RUNDIR -eq 1 ]; then
create_default_dir "$CELERYD_PID_DIR"
fi
}
create_paths() {
create_default_dir "$CELERYD_LOG_DIR"
create_default_dir "$CELERYD_PID_DIR"
}
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
_get_pidfiles () {
# note: multi < 3.1.14 output to stderr, not stdout, hence the redirect.
${CELERYD_MULTI} expand "${CELERYD_PID_FILE}" ${CELERYD_NODES} 2>&1
}
_get_pids() {
found_pids=0
my_exitcode=0
for pidfile in $(_get_pidfiles); do
local pid=`cat "$pidfile"`
local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
echo "bad pid file ($pidfile)"
one_failed=true
my_exitcode=1
else
found_pids=1
echo "$pid"
fi
if [ $found_pids -eq 0 ]; then
echo "${SCRIPT_NAME}: All nodes down"
exit $my_exitcode
fi
done
}
_chuid () {
su "$CELERYD_USER" -c "$CELERYD_MULTI $*"
}
start_workers () {
if [ ! -z "$CELERYD_ULIMIT" ]; then
ulimit $CELERYD_ULIMIT
fi
_chuid $* start $CELERYD_NODES $DAEMON_OPTS \
--pidfile="$CELERYD_PID_FILE" \
--logfile="$CELERYD_LOG_FILE" \
--loglevel="$CELERYD_LOG_LEVEL" \
$CELERY_APP_ARG \
$CELERYD_OPTS
}
dryrun () {
(C_FAKEFORK=1 start_workers --verbose)
}
stop_workers () {
_chuid stopwait $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers () {
_chuid restart $CELERYD_NODES $DAEMON_OPTS \
--pidfile="$CELERYD_PID_FILE" \
--logfile="$CELERYD_LOG_FILE" \
--loglevel="$CELERYD_LOG_LEVEL" \
$CELERY_APP_ARG \
$CELERYD_OPTS
}
kill_workers() {
_chuid kill $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers_graceful () {
echo "WARNING: Use with caution in production"
echo "The workers will attempt to restart, but they may not be able to."
local worker_pids=
worker_pids=`_get_pids`
[ "$one_failed" ] && exit 1
for worker_pid in $worker_pids; do
local failed=
kill -HUP $worker_pid 2> /dev/null || failed=true
if [ "$failed" ]; then
echo "${SCRIPT_NAME} worker (pid $worker_pid) could not be restarted"
one_failed=true
else
echo "${SCRIPT_NAME} worker (pid $worker_pid) received SIGHUP"
fi
done
[ "$one_failed" ] && exit 1 || exit 0
}
check_status () {
my_exitcode=0
found_pids=0
local one_failed=
for pidfile in $(_get_pidfiles); do
if [ ! -r $pidfile ]; then
echo "${SCRIPT_NAME} down: no pidfiles found"
one_failed=true
break
fi
local node=`basename "$pidfile" .pid`
local pid=`cat "$pidfile"`
local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
echo "bad pid file ($pidfile)"
one_failed=true
else
local failed=
kill -0 $pid 2> /dev/null || failed=true
if [ "$failed" ]; then
echo "${SCRIPT_NAME} (node $node) (pid $pid) is down, but pidfile exists!"
one_failed=true
else
echo "${SCRIPT_NAME} (node $node) (pid $pid) is up..."
fi
fi
done
[ "$one_failed" ] && exit 1 || exit 0
}
case "$1" in
start)
check_dev_null
check_paths
start_workers
;;
stop)
check_dev_null
check_paths
stop_workers
;;
reload|force-reload)
echo "Use restart"
;;
status)
check_status
;;
restart)
check_dev_null
check_paths
restart_workers
;;
graceful)
check_dev_null
restart_workers_graceful
;;
kill)
check_dev_null
kill_workers
;;
dryrun)
check_dev_null
dryrun
;;
try-restart)
check_dev_null
check_paths
restart_workers
;;
create-paths)
check_dev_null
create_paths
;;
check-paths)
check_dev_null
check_paths
;;
*)
echo "Usage: /etc/init.d/${SCRIPT_NAME} {start|stop|restart|graceful|kill|dryrun|create-paths}"
exit 64 # EX_USAGE
;;
esac
exit 0
This is old and very long, and it seems to contain nothing I can change to affect the broker used, except the location of the default-values script, CELERY_DEFAULTS=/etc/default/celeryd (a confusing name again). I admit I pretty much copied and pasted this script without fully understanding it, though I do know how init.d scripts work.
When I run /etc/init.d/celeryd start, the workers start up but ignore the BROKER Django settings pointing to my Redis server and try to connect to RabbitMQ instead. The log file /var/log/celery/w1.log shows:
[2016-11-30 23:44:51,873: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
So Celery is trying to use RabbitMQ, not Redis. Other posts on Stack Overflow complain of the same problem, but none are resolved (as far as I can tell). I put djcelery in INSTALLED_APPS, as it seemed to make the celeryd_multi management command available, but I don't want to use celery beat, and the documentation says this is no longer necessary. I have my own queue set up to run management commands, and I have had too many problems setting up celerybeat in the past.
I have got things working by running sudo -u jimmy /srv/intro/bin/celery -A intro worker &, and this uses the correct Redis queue (does anyone know why it is called a broker?), but it won't restart on a server power cycle and does not write to the log files, and I just don't feel this is a clean way to run Celery workers.
I don't really want to use /etc/init.d scripts, as this is the old way of doing things; upstart came and went as its replacement, and now systemd is the supported way (please correct me if I am wrong). There is no mention of these methods in the official documentation:
http://docs.celeryproject.org/en/v4.0.0/userguide/daemonizing.html#init-script-celeryd
which makes me think that this part of Celery is no longer maintained, and that perhaps there is a better-maintained way of doing this. It is a wonder it has not been built into the core.
I did find this
https://github.com/celery/celery/blob/3.1/extra/supervisord/supervisord.conf
but there is no mention of a broker in the config files, and I doubt this will help me use Redis.
How do I get Celery running as a daemon that starts automatically on reboot and uses Redis as the message queue, or is using the RabbitMQ message queue my only way of running functions asynchronously from Django with Celery?
To ensure Celery loads the correct broker, pass the broker parameter to the Celery class:
app = Celery('intro', broker=settings.BROKER_URL)
Reference:
http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#application
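To confirm the workers actually pick up Redis after this change, you can print the effective configuration and watch Redis connections. A minimal sketch, assuming the virtualenv path, user, and app name from the question:
# Print the effective Celery config; the broker line should show redis://, not amqp://
sudo -u jimmy /srv/intro/bin/celery -A intro report | grep -i broker
# Confirm connections are reaching Redis once the workers are up
redis-cli -h 127.0.0.1 -p 6379 info clients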

How do I check if XML-RPC is working on OpenERP 7?

I'm trying to figure out whether my server is accepting XML-RPC requests. What I know is that when I open http://my_ip:8069/xmlrpc/common or http://my_ip:8069/xmlrpc/object in my browser, I get a "File not found" error. Is that expected?
Do I have to start my server with any flag to enable XML-RPC support?
Here is my openerp-server.conf
admin_passwd = my_pass
db_host = False
db_port = False
db_user = openerp
db_password = False
addons_path = /opt/openerp/v7/addons,/opt/openerp/v7/web/addons,/home/lfc/openerp/v7/addons
;Log settings
logfile = /var/log/openerp/openerp-server.log
; log_level = error
log_level = debug
And my starting script:
#!/bin/sh
### BEGIN INIT INFO
# Provides: openerp-server
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Should-Start: $network
# Should-Stop: $network
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Enterprise Resource Management software
# Description: Open ERP is a complete ERP and CRM software.
### END INIT INFO
PATH=/bin:/sbin:/usr/bin
DAEMON=/opt/openerp/v7/server/openerp-server
NAME=openerp-server
DESC=openerp-server
# Specify the user name (Default: openerp).
USER=openerp
# Specify an alternate config file (Default: /etc/openerp-server.conf).
CONFIGFILE="/etc/openerp-server.conf"
# pidfile
PIDFILE=/var/run/$NAME.pid
# Additional options that are passed to the Daemon.
DAEMON_OPTS="-c $CONFIGFILE"
# Added option to debug
#DAEMON_OPTS="-c $CONFIGFILE --debug --log-level=debug"
# Log File
LOGFILE=/var/log/openerp/openerp-server.log
[ -x $DAEMON ] || exit 0
[ -f $CONFIGFILE ] || exit 0
checkpid() {
[ -f $PIDFILE ] || return 1
pid=`cat $PIDFILE`
[ -d /proc/$pid ] && return 0
return 1
}
case "${1}" in
start)
#Check if daemon is already running
checkpid
if [ "$pid" != "" ]; then
echo "$NAME already running with pid $pid."
else
#Only starts if not running
echo -n "Starting ${DESC}: "
start-stop-daemon --start --quiet --pidfile ${PIDFILE} \
--chuid ${USER} --background --make-pidfile \
--exec ${DAEMON} -- ${DAEMON_OPTS} --logfile=${LOGFILE}
echo "${NAME}, OK."
fi
;;
stop)
echo -n "Stopping ${DESC}: "
start-stop-daemon --stop --quiet --pidfile ${PIDFILE} \
--oknodo
#Check existence of PID file. If exists, then remove it
checkpid
if [ "$pid" != "" ]; then
rm ${PIDFILE}
echo -n "${PIDFILE} removed "
fi
echo "${NAME}, Stoped!"
;;
restart|force-reload)
echo -n "Restarting ${DESC}: "
start-stop-daemon --stop --quiet --pidfile ${PIDFILE} \
--oknodo
echo -n "\n${NAME}: STOPPED"
sleep 1
#Check existence of PID file. If exists, then remove it
checkpid
if [ "$pid" != "" ]; then
rm ${PIDFILE}
echo -n "\n${PIDFILE} REMOVED "
fi
sleep 1
start-stop-daemon --start --quiet --pidfile ${PIDFILE} \
--chuid ${USER} --background --make-pidfile \
--exec ${DAEMON} -- ${DAEMON_OPTS}
echo "\n${NAME}: STARTED\n"
;;
*)
N=/etc/init.d/${NAME}
echo "Usage: ${NAME} {start|stop|restart|force-reload}" >&2
exit 1
;;
esac
exit 0
You have to access it with a valid XML-RPC request; a plain browser GET will not work.
Common (xmlrpc/common) is for connection purposes and Object (xmlrpc/object) is for accessing tables and functions (modules).
For that you can create an API using Python, PHP, Java, etc.
For example check this link
To check it you may use the openerp-proxy project, which can be used as a library or CLI client for OpenERP / Odoo over XML-RPC or JSON-RPC.
So to test your connection:
$ pip install openerp_proxy # also install IPython if you like it
$ openerp_proxy # this will open a python/IPython shell
>>> db = session.connect() # This will ask you required info for connection
>>> db.registered_objects # this should return list of registered Openerp/Odoo models
Hope this will help
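The "File not found" in the browser is actually expected: the /xmlrpc/ endpoints only answer XML-RPC POST requests, and a browser issues a plain GET. As a hedged quick check from the shell (my_ip is a placeholder), you can POST a minimal version call to the common endpoint; a working server replies with an XML methodResponse:
# Send a raw XML-RPC "version" call to the common endpoint
curl -s -X POST http://my_ip:8069/xmlrpc/common \
  -H 'Content-Type: text/xml' \
  --data '<?xml version="1.0"?><methodCall><methodName>version</methodName><params/></methodCall>'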

How to start JBoss in the background

I'm using CentOS 6 and JBoss 7.1.1.Final. I use ./domain.sh to start JBoss; when it starts it outputs a lot of logs, and the worst thing is that as soon as I disconnect the SSH session, the server goes down. I've already tried ./domain.sh &, but nothing changes.
In domain.sh it says:
if [ "x$LAUNCH_JBOSS_IN_BACKGROUND" = "x" ]; then
# Execute the JVM in the foreground
eval \"$JAVA\" -D\"[Process Controller]\" $PROCESS_CONTROLLER_JAVA_OPTS \
\"-Dorg.jboss.boot.log.file=$JBOSS_LOG_DIR/process-controller.log\" \
\"-Dlogging.configuration=file:$JBOSS_CONFIG_DIR/logging.properties\" \
-jar \"$JBOSS_HOME/jboss-modules.jar\" \
-mp \"${JBOSS_MODULEPATH}\" \
org.jboss.as.process-controller \
-jboss-home \"$JBOSS_HOME\" \
-jvm \"$JAVA_FROM_JVM\" \
-mp \"${JBOSS_MODULEPATH}\" \
-- \
\"-Dorg.jboss.boot.log.file=$JBOSS_LOG_DIR/host-controller.log\" \
\"-Dlogging.configuration=file:$JBOSS_CONFIG_DIR/logging.properties\" \
$HOST_CONTROLLER_JAVA_OPTS \
-- \
-default-jvm \"$JAVA_FROM_JVM\" \
"$#"
JBOSS_STATUS=$?
else
# Execute the JVM in the background
eval \"$JAVA\" -D\"[Process Controller]\" $PROCESS_CONTROLLER_JAVA_OPTS \
\"-Dorg.jboss.boot.log.file=$JBOSS_LOG_DIR/process-controller.log\" \
\"-Dlogging.configuration=file:$JBOSS_CONFIG_DIR/logging.properties\" \
-jar \"$JBOSS_HOME/jboss-modules.jar\" \
-mp \"${JBOSS_MODULEPATH}\" \
org.jboss.as.process-controller \
-jboss-home \"$JBOSS_HOME\" \
-jvm \"$JAVA_FROM_JVM\" \
-mp \"${JBOSS_MODULEPATH}\" \
-- \
\"-Dorg.jboss.boot.log.file=$JBOSS_LOG_DIR/host-controller.log\" \
\"-Dlogging.configuration=file:$JBOSS_CONFIG_DIR/logging.properties\" \
$HOST_CONTROLLER_JAVA_OPTS \
-- \
-default-jvm \"$JAVA_FROM_JVM\" \
"$#" "&"
JBOSS_PID=$!
# Trap common signals and relay them to the jboss process
trap "kill -HUP $JBOSS_PID" HUP
trap "kill -TERM $JBOSS_PID" INT
trap "kill -QUIT $JBOSS_PID" QUIT
trap "kill -PIPE $JBOSS_PID" PIPE
trap "kill -TERM $JBOSS_PID" TERM
if [ "x$JBOSS_PIDFILE" != "x" ]; then
echo $JBOSS_PID > $JBOSS_PIDFILE
fi
# Wait until the background process exits
WAIT_STATUS=128
while [ "$WAIT_STATUS" -ge 128 ]; do
wait $JBOSS_PID 2>/dev/null
WAIT_STATUS=$?
if [ "$WAIT_STATUS" -gt 128 ]; then
SIGNAL=`expr $WAIT_STATUS - 128`
SIGNAL_NAME=`kill -l $SIGNAL`
echo "*** JBossAS process ($JBOSS_PID) received $SIGNAL_NAME signal ***" >&2
fi
done
if [ "$WAIT_STATUS" -lt 127 ]; then
JBOSS_STATUS=$WAIT_STATUS
else
JBOSS_STATUS=0
fi
if [ "$JBOSS_STATUS" -ne 10 ]; then
# Wait for a complete shutdown
wait $JBOSS_PID 2>/dev/null
fi
if [ "x$JBOSS_PIDFILE" != "x" ]; then
grep "$JBOSS_PID" $JBOSS_PIDFILE && rm $JBOSS_PIDFILE
fi
fi
if [ "$JBOSS_STATUS" -eq 10 ]; then
echo "Restarting JBoss..."
else
exit $JBOSS_STATUS
fi
I tried removing the if clause, but no miracle happened.
Alternatively, how do I install JBoss 7 as a service? Thanks a lot.
Here is an approach using an xinetd service. Try running the script as a daemon/service by adding your own service to xinetd. Once the service is started, it will keep running on the server even after you close the SSH session. See the following links for further guidance on setting up your own service:
link1
link2
link3
This is plain Linux administration work, but it is easy to create your own service. You should select a dedicated port for this service and make sure the port is open in the firewall.
I hope this will help you
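Note that the script quoted in the question already supports backgrounding: the if clause switches on the LAUNCH_JBOSS_IN_BACKGROUND variable, and JBOSS_PIDFILE is honored as well. A minimal sketch that combines this with nohup so the process survives the SSH disconnect (the log and pid paths are illustrative):
# Take the script's background branch and detach it from the terminal
LAUNCH_JBOSS_IN_BACKGROUND=1 JBOSS_PIDFILE=/tmp/jboss-domain.pid \
  nohup ./domain.sh > /tmp/jboss-console.log 2>&1 &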

upgrade redis 2.4.14 to redis 2.6.14, command "service redis start" always hangs

I installed redis 2.4.14 before.
Yesterday I got redis 2.6.14 and ran "cd redis-2.6.14/src; make && make install" directly.
I removed the dump.rdb and redis.log left over from redis 2.4.14.
I also upgraded the configuration file to 2.6.14.
I had added redis as a service when I installed redis 2.4.14.
When I execute the command "service redis start", it always hangs with no "OK" message.
[tys@localhost bin]# service redis start
Starting redis-server:
I can use redis normally:
[tys@localhost redis]# redis-cli
redis 127.0.0.1:6379> set name tys
OK
redis 127.0.0.1:6379> get name
"tys"
but if I type Ctrl+C or Ctrl+Z, redis-cli hangs.
When I reboot the system, the Linux boot process hangs on "Starting redis-server".
(Sorry, I am too "young" to post image. https://groups.google.com/forum/#!topic/redis-db/iQnlyAAWE9Y)
But I can still SSH into it; it's a virtual machine.
There is no error in the redis.log.
[1420] 11 Aug 04:27:05.879 # Server started, Redis version 2.6.14
[1420] 11 Aug 04:27:05.880 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[1420] 11 Aug 04:27:05.903 * DB loaded from disk: 0.023 seconds
[1420] 11 Aug 04:27:05.903 * The server is now ready to accept connections on port 6379
Here is my redis init.d script :
#!/bin/bash
#
#redis - this script starts and stops the redis-server daemon
#
# chkconfig: 235 90 10
# description: Redis is a persistent key-value database
# processname: redis-server
# config: /etc/redis.conf
# config: /etc/sysconfig/redis
# pidfile: /var/run/redis.pid
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
redis="/usr/local/bin/redis-server"
prog=$(basename $redis)
REDIS_CONF_FILE="/etc/redis.conf"
[ -f /etc/sysconfig/redis ] && . /etc/sysconfig/redis
lockfile=/var/lock/subsys/redis
start() {
[ -x $redis ] || exit 5
[ -f $REDIS_CONF_FILE ] || exit 6
echo -n $"Starting $prog: "
daemon $redis $REDIS_CONF_FILE
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
}
stop() {
echo -n $"Stopping $prog: "
killproc $prog -QUIT
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
restart() {
stop
start
}
reload() {
echo -n $"Reloading $prog: "
killproc $redis -HUP
RETVAL=$?
echo
}
force_reload() {
restart
}
rh_status() {
status $prog
}
rh_status_q() {
rh_status >/dev/null 2>&1
}
case "$1" in
start)
rh_status_q && exit 0
$1
;;
stop)
rh_status_q || exit 0
$1
;;
restart|configtest)
$1
;;
reload)
rh_status_q || exit 7
$1
;;
force-reload)
force_reload
;;
status)
rh_status
;;
condrestart|try-restart)
rh_status_q || exit 0
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
exit 2
esac
I resolved it with Josiah's help on https://groups.google.com/forum/#!forum/redis-db.
The cause was "daemonize no" in my redis.conf: with that setting redis-server stays in the foreground, so the init script's daemon call never returns and "service redis start" hangs. Redis started normally after I switched to "daemonize yes".
Version 3.0.1 on CentOS 6.6: the same problem.
I tried two different init scripts.
"daemonize yes" solved the problem!
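For reference, a minimal sketch of the fix, assuming the /etc/redis.conf path used by the init script above:
# Switch redis-server to daemonized mode so the init script's daemon call returns
sudo sed -i 's/^daemonize no/daemonize yes/' /etc/redis.conf
sudo service redis restart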