Rsnapshot Issue - backup

I am trying to do some backups with Rsnapshot and am constantly getting this error:
/usr/bin/rsync -av --delete --numeric-ids --relative --delete-excluded \
--stats -L --whole-file --exclude=*/web/ --exclude=*/tmp/ \
--exclude=*/dms/ --exclude=*/Recycle\ Bin/ --exclude=*/app/logs/ \
--exclude=*/app/cache/ --exclude=*/vendor/ --exclude=/var/www/files/ \
--exclude=*/releases/ \
--exclude=/var/www/www.xxx.net/app/var/sessions/ \
--rsync-path=rsync_wrapper.sh --exclude=/var/www/psan-static/ \
--rsh=/usr/bin/ssh -p 9922 backup@xxx.xxx.xxx.xxx:/var/www \
/data-ext/backups/rsnapshot/daily.0/myserver/
Unexpected remote arg: backup@xxx.xxx.xxx.xxx:/var/www
rsync error: syntax or usage error (code 1) at main.c(1348) [sender=3.1.0]
----------------------------------------------------------------------------
rsnapshot encountered an error! The program was invoked with these options:
/usr/bin/rsnapshot daily
----------------------------------------------------------------------------
ERROR: /usr/bin/rsync returned 1 while processing backup@xxx.xxx.xxx.xxx:/var/www/
/usr/bin/logger -i -p user.err -t rsnapshot /usr/bin/rsnapshot daily: \
ERROR: /usr/bin/rsync returned 1 while processing \
backup@xxx.xxx.xxx.xxx:/var/www/
I have been playing with the parameters but cannot figure out what the issue is.

The issue was with
--exclude=*/Recycle\ Bin/
The pattern needs to be quoted: because of the space, the option gets split into two arguments, so rsync misparses the rest of the command line (hence the "Unexpected remote arg" error). Quoting keeps the pattern in one piece:
--exclude="*/Recycle\ Bin/"

Related

Which API can I use to add my own logs to QEMU for debugging purposes

I tried to add my own log messages to QEMU using fprintf(stdout, "my own log") and qemu_log("my own log"), then compiled QEMU from source and started a VM with the following command:
/usr/bin/qemu-system-x86_64 \
-D /home/VM1-qemu-log.txt \
-d cpu_reset \
-enable-kvm \
-m 4096 \
-nic user,model=virtio \
-drive file=/var/lib/libvirt/images/VM1.qcow2,media=disk,if=virtio \
-nographic
There are CPU-related logs in VM1-qemu-log.txt; however, I cannot find "my own log" anywhere. Can anyone advise? Thanks!
qemu_log("my own log") works, I added it to the wrong place(i.e., beginning of the 'main()' in 'qemu/vl.c', where the logging has not been setup yet). By adding it to another place(e.g. in virtio_blk_get_request() under qemu/hw/block/virtio-blk.c), I will be able to see "my own log" in /home/VM1-qemu-log.txt. The VM is created by:
/usr/bin/qemu-system-x86_64 \
-D /home/VM1-qemu-log.txt \
-enable-kvm \
-m 4096 \
-nic user,model=virtio \
-drive file=/var/lib/libvirt/images/VM1.qcow2,media=disk,if=virtio \
-nographic
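To confirm the message actually reaches the log once the guest starts issuing virtio-blk requests, something like the following can be used (paths are the ones from the command above):
# watch the log file for the custom message while the guest boots and does disk I/O
tail -f /home/VM1-qemu-log.txt | grep --line-buffered "my own log"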

Compiling tensorflow: undefined reference to `clSetUserEventStatus@OPENCL_1.1'

PS: The question is at the end; the following is just background information that might help.
I'm trying to compile tensorflow, but getting this error:
bazel-out/host/bin/_solib_local/_U@local_Uconfig_Usycl_S_Ssycl_Csyclrt___Uexternal_Slocal_Uconfig_Usycl_Ssycl_Slib/libComputeCpp.so:
undefined reference to `clSetUserEventStatus@OPENCL_1.1'
By using "strace" tool, i've isolated the exact statement that's failing to be:
(
cd /home/rh/.cache/bazel/_bazel_rh/81919f16ea125cb9f08993f06569f022/execroot/tensorflow && \
exec env - PATH=/home/rh/.nvm/versions/node/v6.9.5/bin:/home/rh/perl5/bin:/home/rh/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/rh/bin:/usr/local/java/jdk1.8.0_74/bin:/home/rh/.local/bin:/home/rh/myscripts \
/usr/bin/clang++-3.6 \
-o \
bazel-out/host/bin/tensorflow/python/gen_functional_ops_py_wrappers_cc \
-Lbazel-out/host/bin/_solib_local/_U@local_Uconfig_Usycl_S_Ssycl_Csyclrt___Uexternal_Slocal_Uconfig_Usycl_Ssycl_Slib \
-Wl,-rpath,$ORIGIN/../../_solib_local/_U@local_Uconfig_Usycl_S_Ssycl_Csyclrt___Uexternal_Slocal_Uconfig_Usycl_Ssycl_Slib \
-pthread \
-Wl,-no-as-needed \
-B/usr/bin/ \
-Wl,-no-as-needed \
-Wl,--build-id=md5 \
-Wl,--hash-style=gnu \
-Wl,-S \
-Wl,@bazel-out/host/bin/tensorflow/python/gen_functional_ops_py_wrappers_cc-2.params \
-Wl,--no-undefined
)
strace also confirms that this command IS reading /usr/local/computecpp/lib/libComputeCpp.so, which contains the aforementioned clSetUserEventStatus symbol.
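(For reference, the check looked roughly like this; the trace file name is arbitrary and the link command is abbreviated:)
# run the failing link step under strace and see which libComputeCpp.so gets opened
strace -f -e trace=open,openat -o /tmp/link.trace /usr/bin/clang++-3.6 ...   # full command as above
grep 'libComputeCpp\.so' /tmp/link.trace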
I verified that libComputeCpp.so contains clSetUserEventStatus by checking the output of the nm command:
nm /usr/local/computecpp/lib/libComputeCpp.so | grep clSetUserEventStatus
>> U clSetUserEventStatus@@OPENCL_1.1
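(A sketch of a further check: nm can list only the symbols the library itself defines, which distinguishes an exported symbol from a mere reference to one; the path is the same as above.)
nm -D --defined-only /usr/local/computecpp/lib/libComputeCpp.so | grep clSetUserEventStatus
# an empty result would mean the library only references the symbol (the "U" above)
# and expects something else, such as an OpenCL ICD loader, to provide it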
So here is my question: clang (and/or its linker) is reading the file that contains the symbol that it's complaining about... why is it "missing something" and complaining that it's undefined?
How can I get this compile to work?

CaffeOnSpark/Scala--ERROR yarn.ApplicationMaster: User class threw exception: java.lang.NullPointerException

When I train a DNN using CaffeOnSpark with 2 Spark executors over an Ethernet connection, I get an error. I run the job as in the example at https://github.com/yahoo/CaffeOnSpark/wiki/GetStarted_yarn:
export SPARK_WORKER_INSTANCES=2
export DEVICES=1
hadoop fs -rm -f hdfs:///mnist.model
hadoop fs -rm -r -f hdfs:///mnist_features_result
spark-submit --master yarn --deploy-mode cluster \
--num-executors 2 \
--files ${CAFFE_ON_SPARK}/data/lenet_memory_solver.prototxt,${CAFFE_ON_SPARK}/data/lenet_memory_train_test.prototxt \
--conf spark.driver.extraLibraryPath="${LD_LIBRARY_PATH}" \
--conf spark.executorEnv.LD_LIBRARY_PATH="${LD_LIBRARY_PATH}" \
--class com.yahoo.ml.caffe.CaffeOnSpark \
${CAFFE_ON_SPARK}/caffe-grid/target/caffe-grid-0.1-SNAPSHOT-jar-with-dependencies.jar \
-train \
-features accuracy,loss -label label \
-conf lenet_memory_solver.prototxt \
-devices 1 \
-connection ethernet \
-model hdfs:///mnist.model \
-output hdfs:///mnist_features_result
Here is the error I'm getting (the full output is linked in the original post).
When I look at the datanode log, the error appears there as well (log linked in the original post).
Thank you very much for your answers.
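For reference, the full stack trace behind a "User class threw exception" failure can usually be pulled from YARN once the job's application id is known (the id below is just a placeholder):
# fetch the aggregated driver and executor logs for the failed run
yarn logs -applicationId application_1234567890_0001 | grep -B 5 -A 30 NullPointerException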

multiple KVM guests script using virt-install

I would like to install 3 KVM guests automatically using kickstart.
I have no problem installing it manually using virt-install command.
virt-install \
-n dal \
-r 2048 \
--vcpus=1 \
--os-variant=rhel6 \
--accelerate \
--network bridge:br1,model=virtio \
--disk path=/home/dal_internal,size=128 --force \
--location="/home/kvm.iso" \
--nographics \
--extra-args="ks=file:/dal_kick.cfg console=tty0 console=ttyS0,115200n8 serial" \
--initrd-inject=/opt/dal_kick.cfg \
--virt-type kvm
I have 3 scripts like the one above. I would like to install all 3 at the same time; how can I disable the console, or run the installation in the background?
Based on the virt-install man page:
http://www.tin.org/bin/man.cgi?section=1&topic=virt-install
--noautoconsole
Don't automatically try to connect to the guest console. The
default behaviour is to launch virt-viewer(1) to display the
graphical console, or to run the "virsh" "console" command to
display the text console. Use of this parameter will disable this
behaviour.
virt-install connects to the guest console automatically. If you don't want that, simply add --noautoconsole to your command, like:
virt-install \
-n dal \
-r 2048 \
--vcpus=1 \
--quiet \
--noautoconsole \
...... other options
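With --noautoconsole the installation keeps running detached; if a console is needed later it can be reattached with virsh (domain name taken from the example):
virsh console dal     # reattach the text console
virsh list --all      # check the state of all guests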
We faced the same problem, and in the end the only way we found was to run each installation in the background with &.
We also include the --quiet option (not mandatory); it only prints fatal error messages.
virt-install \
-n dal \
-r 2048 \
--vcpus=1 \
--quiet \
--os-variant=rhel6 \
--accelerate \
--network bridge:br1,model=virtio \
--disk path=/home/dal_internal,size=128 --force \
--location="/home/kvm.iso" \
--nographics \
--extra-args="ks=file:/dal_kick.cfg console=tty0 console=ttyS0,115200n8 serial" \
--initrd-inject=/opt/dal_kick.cfg \
--virt-type kvm &
I know this is kind of old, but I wanted to share my thoughts.
I ran into the same problem, but due to the environment we work in, we need to use sudo with a password (compliance reasons). The solution I came up with was to use timeout instead of &. When we forked it right away, it would hang because the sudo prompt never appeared. So, using timeout with the example above (we obviously ran timeout 10 sudo virt-install ...):
timeout 15 virt-install \
-n dal \
-r 2048 \
--vcpus=1 \
--quiet \
--os-variant=rhel6 \
--accelerate \
--network bridge:br1,model=virtio \
--disk path=/home/dal_internal,size=128 --force \
--location="/home/kvm.iso" \
--nographics \
--extra-args="ks=file:/dal_kick.cfg console=tty0 console=ttyS0,115200n8 serial" \
--initrd-inject=/opt/dal_kick.cfg \
--virt-type kvm
This allowed us to respond to the sudo prompt and send the password over, and then start the build. The timeout doesn't kill the installation; it continues on, and so can your script.

LDAP is failing 'make test' with Segmentation fault

I am attempting to install LDAP on a virtual host. Following the quick-start guide, I am building OpenLDAP on CentOS release 5.6 (Final). During configuration it complained about my Berkeley DB, so I built Berkeley DB 4.8.3 and ran:
env CPPFLAGS=-I/usr/local/BerkeleyDB.4.8/include LDFLAGS=-L/usr/local/BerkeleyDB.4.8/lib ./configure
After successfully working through make, I am failing at make test:
>>>>> Starting test060-mt-hot for bdb...
running defines.sh
Running slapadd to build slapd database...
Running slapindex to index slapd database...
Starting slapd on TCP/IP port 9011...
/var/www/vhosts/poooblic.com/ldap/openldap-2.4.32/tests/../servers/slapd/slapd -s0 -f /var/www/vhosts/poooblic.com/ldap/openldap-2.4.32/tests/testrun/slapd.1.conf -h ldap://localhost:9011/ -
Testing basic monitor search...
Monitor searches
Testing basic mt-hot search: 1 threads (1 x 50000) loops...
./progs/slapd-mtread -H ldap://localhost:9011/ -D cn=Manager,dc=example,dc=com -w secret -e cn=Monitor -m 1 -L 1 -l 50000
Testing basic mt-hot search: 5 threads (1 x 10000) loops...
./progs/slapd-mtread -H ldap://localhost:9011/ -D cn=Manager,dc=example,dc=com -w secret -e cn=Monitor -m 5 -L 1 -l 10000
Testing basic mt-hot search: 100 threads (5 x 100) loops...
./progs/slapd-mtread -H ldap://localhost:9011/ -D cn=Manager,dc=example,dc=com -w secret -e cn=Monitor -m 100 -L 5 -l 100
Random searches
Testing random mt-hot search: 1 threads (1 x 50000) loops...
./progs/slapd-mtread -H ldap://localhost:9011/ -D cn=Manager,dc=example,dc=com -w secret -e dc=example,dc=com -f (objectclass=*) -m 1 -L 1 -l 50000
Testing random mt-hot search: 5 threads (1 x 10000) loops...
./progs/slapd-mtread -H ldap://localhost:9011/ -D cn=Manager,dc=example,dc=com -w secret -e dc=example,dc=com -f (objectclass=*) -m 5 -L 1 -l 10000
Testing random mt-hot search: 100 threads (5 x 100) loops...
./progs/slapd-mtread -H ldap://localhost:9011/ -D cn=Manager,dc=example,dc=com -w secret -e dc=example,dc=com -f (objectclass=*) -m 100 -L 5 -l 100
./scripts/test060-mt-hot: line 190: 15747 Segmentation fault $SLAPDMTREAD -H $URI1 -D "$MANAGERDN" -w $PASSWD -e "$BASEDN" -f "(objectclass=*)" -m $THR -L $OUTER -l $INNER >> $MTREADOUT 2>&1
slapd-mtread failed (139)!
>>>>> test060-mt-hot failed for bdb
(exit 139)
make[2]: *** [bdb-yes] Error 139
make[2]: Leaving directory `/var/www/vhosts/poooblic.com/ldap/openldap-2.4.32/tests'
make[1]: *** [test] Error 2
make[1]: Leaving directory `/var/www/vhosts/poooblic.com/ldap/openldap-2.4.32/tests'
make: *** [test] Error 2
I am not a sys admin and don't fully understand the build process. What docs can I read to help me understand why it is failing, and what can I do to work through it? Thanks!
Please follow the steps below, and let me know in case of any issue.
cd db-4.7.25.NC
cd build_unix/
../dist/configure
make
make install
export CPPFLAGS="-I/usr/local/BerkeleyDB.4.7/include"
export LDFLAGS="-L/usr/local/BerkeleyDB.4.7/lib"
export LD_LIBRARY_PATH=/opt/db-4.7.25.NC/build_unix/.libs   # change this path to match your Berkeley DB build location
./configure
make depend
make
make test
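If make test still fails at the same point, the failing test can be rerun on its own while debugging (a sketch, assuming the source tree layout from the question; the run wrapper ships in the OpenLDAP tests directory):
cd /var/www/vhosts/poooblic.com/ldap/openldap-2.4.32/tests
./run test060-mt-hot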
Regards
Naveen