VMware Server: Best way to back up images

What is the best way to back up VMware Server (1.0.x) virtual machines?
The virtual machines in question are our development environment, and they run isolated from the main network (so you can't just copy data from the virtual servers to real ones).
The image files are normally in use and locked while the server is running, so it is difficult to back them up with the machines running.
Currently: I manually pause the servers when I leave and have a scheduled task that runs at midnight to robocopy the images to a remote NAS.
Is there a better way to do this, ideally without having to remember to pause the virtual machines?

VMware Server includes the command-line tool "vmware-cmd", which can be used to perform virtually any operation that can be performed through the console.
In this case you would simply add a "vmware-cmd <cfg> suspend" to your script before starting your backup, and a "vmware-cmd <cfg> start" after the backup is completed.
We use vmware-server as part of our build system to provide a known environment to run automated DB upgrades against, so we end up rolling back state as part of each build (driven by CruiseControl), and have found this interface to be rock solid.
Usage: /usr/bin/vmware-cmd <options> <vm-cfg-path> <vm-action> <arguments>
/usr/bin/vmware-cmd -s <options> <server-action> <arguments>
Options:
Connection Options:
-H <host> specifies an alternative host (if set, -U and -P must also be set)
-O <port> specifies an alternative port
-U <username> specifies a user
-P <password> specifies a password
General Options:
-h More detailed help.
-q Quiet. Minimal output
-v Verbose.
Server Operations:
/usr/bin/vmware-cmd -l
/usr/bin/vmware-cmd -s register <config_file_path>
/usr/bin/vmware-cmd -s unregister <config_file_path>
/usr/bin/vmware-cmd -s getresource <variable>
/usr/bin/vmware-cmd -s setresource <variable> <value>
VM Operations:
/usr/bin/vmware-cmd <cfg> getconnectedusers
/usr/bin/vmware-cmd <cfg> getstate
/usr/bin/vmware-cmd <cfg> start <powerop_mode>
/usr/bin/vmware-cmd <cfg> stop <powerop_mode>
/usr/bin/vmware-cmd <cfg> reset <powerop_mode>
/usr/bin/vmware-cmd <cfg> suspend <powerop_mode>
/usr/bin/vmware-cmd <cfg> setconfig <variable> <value>
/usr/bin/vmware-cmd <cfg> getconfig <variable>
/usr/bin/vmware-cmd <cfg> setguestinfo <variable> <value>
/usr/bin/vmware-cmd <cfg> getguestinfo <variable>
/usr/bin/vmware-cmd <cfg> getid
/usr/bin/vmware-cmd <cfg> getpid
/usr/bin/vmware-cmd <cfg> getproductinfo <prodinfo>
/usr/bin/vmware-cmd <cfg> connectdevice <device_name>
/usr/bin/vmware-cmd <cfg> disconnectdevice <device_name>
/usr/bin/vmware-cmd <cfg> getconfigfile
/usr/bin/vmware-cmd <cfg> getheartbeat
/usr/bin/vmware-cmd <cfg> getuptime
/usr/bin/vmware-cmd <cfg> getremoteconnections
/usr/bin/vmware-cmd <cfg> gettoolslastactive
/usr/bin/vmware-cmd <cfg> getresource <variable>
/usr/bin/vmware-cmd <cfg> setresource <variable> <value>
/usr/bin/vmware-cmd <cfg> setrunasuser <username> <password>
/usr/bin/vmware-cmd <cfg> getrunasuser
/usr/bin/vmware-cmd <cfg> getcapabilities
/usr/bin/vmware-cmd <cfg> addredo <disk_device_name>
/usr/bin/vmware-cmd <cfg> commit <disk_device_name> <level> <freeze> <wait>
/usr/bin/vmware-cmd <cfg> answer
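Putting that together, a cron-driven wrapper might look like the sketch below. The paths, the `trysoft` power-op mode, and the overridable command variables are assumptions to adapt to your own layout — this is a sketch, not a drop-in script:

```shell
#!/bin/sh
# Sketch of a nightly backup wrapper. VMWARE_CMD/COPY_CMD/paths are
# hypothetical defaults, overridable so the sequence can be dry-run with stubs.
VMWARE_CMD=${VMWARE_CMD:-/usr/bin/vmware-cmd}
COPY_CMD=${COPY_CMD:-rsync -a}
VM_CFG=${VM_CFG:-/var/lib/vmware/vms/dev/dev.vmx}
SRC=${SRC:-/var/lib/vmware/vms/dev/}
DEST=${DEST:-/mnt/nas/vm-backups/}

backup_vm() {
    $VMWARE_CMD "$VM_CFG" suspend trysoft || return 1  # pause; unlocks the disk files
    $COPY_CMD "$SRC" "$DEST"                           # copy while nothing holds a lock
    $VMWARE_CMD "$VM_CFG" start                        # resume the guest afterwards
}

# Invoked from cron as "backup.sh run"; sourcing the file only defines backup_vm.
if [ "${1:-}" = run ]; then backup_vm; fi
```

Because nothing has to be remembered manually, the midnight robocopy task can simply call this instead.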

Worth looking at rsync? If only part of a large image file is changing, then rsync might be the fastest way to copy the changes.
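A sketch of that idea, run while the VMs are paused (the paths are hypothetical; the flags are standard rsync options):

```shell
# Sync only the changed parts of a paused image directory to the NAS.
# --inplace rewrites changed regions of the large .vmdk files instead of
# recreating whole temporary copies; --partial keeps progress if interrupted.
sync_images() {
    rsync -a --inplace --partial "$1/" "$2/"
}

# e.g. sync_images /vmware/dev-vm /mnt/nas/vm-backups/dev-vm
```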

I found an easy-to-follow guide for backing up VMs in VMware Server 2 here: Backup VMware Server 2

If I recall correctly, VMware Server has a scripting interface, available via Perl or COM. You might be able to use that to automatically pause the VMs before running the backup.
If your backup software is shadow-copy aware, that might work too.

There is a tool called (ahem) HoboCopy which will copy locked VM files. I would recommend taking a snapshot of the VM and then backing up the VMDK, then merging the snapshot after the copy is complete.

Related

Split <INCLUDES> variable in CMAKE_[LANGUAGE]_COMPILE_OBJECT call

I have the following variable:
set( CMAKE_CA65816_COMPILE_OBJECT "<CMAKE_CA65816_COMPILER> --cpu 65816 -s -o <OBJECT> <SOURCE> -I <INCLUDES>")
This results in this command
ca65 --cpu 65816 -s -o Game.s.o Game.s -I include_dir1 include_dir2 include_dir3
But I need the -I parameter repeated for each directory, like this:
ca65 --cpu 65816 -s -o Game.s.o Game.s -I include_dir1 -I include_dir2 -I include_dir3
Is it possible to split the <INCLUDES> CMake variable?
There is a variable CMAKE_INCLUDE_FLAG_<LANG> that denotes the flag to use with include directories. You may also want to set the CMAKE_INCLUDE_SYSTEM_FLAG_<LANG> variable to the flag used for "system" includes (those added with the SYSTEM keyword).
set(CMAKE_INCLUDE_FLAG_CA65816 "-I")
set(CMAKE_INCLUDE_SYSTEM_FLAG_CA65816 "-Isystem")

SSH into server, sudo later with Phing

We have a server that is set up so you can't log in as root directly. You first log in with a regular user, then run su and enter the password.
I need to SSH into a server using Phing, then sudo and run a command. I thought that if I could get it working with plain ssh, I could use that command in an exec task in Phing, but I can't even get the plain SSH part right.
Is this possible?
I've tried the following:
ssh user@server 'su && cd /var/www/clients'
ssh user@server 'su && {{password}} && cd /var/www/clients'
You can use the SshTask together with the approach from how-to-pass-the-password-to-su-sudo-ssh-without-overriding-the-tty.
<project name="ssh-with-later-sudo" default="run-cmd" basedir=".">
    <target name="run-cmd">
        <ssh username="user" password="password" host="server" command="echo password | sudo -S cd /var/www/clients" />
    </target>
</project>
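One caveat with the command above: `sudo -S cd /var/www/clients` changes directory only inside sudo's own child process. A common variation is to run the whole command in a single root shell; here is a sketch (the host, path, and SSH_CMD override are hypothetical, and passing passwords on stdin is weaker than a sudoers NOPASSWD rule):

```shell
# Pipe the password to sudo's stdin (-S) and do the real work inside one
# root shell. SSH_CMD is overridable so the call can be dry-run with a stub.
SSH_CMD=${SSH_CMD:-ssh}

run_as_root() {
    # $1 = user@host, $2 = password
    printf '%s\n' "$2" | $SSH_CMD "$1" "sudo -S sh -c 'cd /var/www/clients && ls'"
}

# e.g. run_as_root user@server "$PASSWORD"
```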

Mesos-master failed to start when running mesos-start-cluster.sh

When I ran mesos-start-cluster.sh after configuring the mesos-master-env and mesos-slave/agent-env files, everything appeared normal. The command's output was as follows:
root@heron02:/home/yitian# ./mesosinstall/sbin/mesos-start-cluster.sh
Starting mesos-master on heron02
ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 heron02 /home/yitian/mesosinstall/sbin/mesos-daemon.sh mesos-master </dev/null >/dev/null
Starting mesos-agent on heron06
ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 heron06 /home/yitian/mesosinstall/sbin/mesos-daemon.sh mesos-agent </dev/null >/dev/null
Starting mesos-agent on heron07
ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 heron07 /home/yitian/mesosinstall/sbin/mesos-daemon.sh mesos-agent </dev/null >/dev/null
Everything's started!
However, when I tried to stop the Mesos cluster with the mesos-stop-cluster command, it showed this error message:
root@heron02:/home/yitian# ./mesosinstall/sbin/mesos-stop-cluster.sh
Stopping mesos-agent on heron06
Stopping mesos-agent on heron07
Stopping mesos-master on heron02
mesos-master: no process found
Everything's stopped!
This means mesos-master failed to start. The configuration files seem fine, and the two agent hosts start successfully.
In addition, the mesos-master-env.sh content is:
# Some options you're likely to want to set:
# export MESOS_log_dir=/var/log/mesos
export MESOS_log_dir=/home/yitian/mesosdata/log
export MESOS_work_dir=/home/yitian/mesosdata/data
export MESOS_ZK=zk://heron01:2181/mesos
export MESOS_quorum=1
And the mesos-slave/agent-env.sh content is:
# The mesos master URL to contact. Should be host:port for
# non-ZooKeeper based masters, otherwise a zk:// or file:// URL.
export MESOS_master=heron01:5050
export MESOS_log_dir=/home/yitian/mesosdata/log
export MESOS_work_dir=/home/yitian/mesosdata/run
#export MESOS_isolation=cgroups
The problem is that the mesos-master process cannot start, and the Mesos UI (http://heron01:5050) can't be opened. What is wrong with it?

docker rabbitmq hostname issue

I am building an image using a Dockerfile, and I would like to add users to RabbitMQ right after installation. The problem is that during the build the hostname of the Docker container is different from when I run the resulting image. RabbitMQ loses that user, because with the changed hostname it uses a different database.
I cannot change the /etc/hosts and /etc/hostname files from inside a container, and it looks like RabbitMQ is not picking up my changes to the RABBITMQ_NODENAME and HOSTNAME variables.
The only thing that I found working is running this before starting RabbitMQ broker:
echo "NODENAME=rabbit@localhost" >> /etc/rabbitmq/rabbitmq.conf.d/ewos.conf
But then I will have to run the Docker image with a changed hostname all the time.
docker run -h="localhost" image
Any ideas on what can be done? Maybe the solution is to add users to RabbitMQ not on build but on image run?
Here is an example of how to configure it properly from the Dockerfile:
ENV HOSTNAME localhost
RUN /etc/init.d/rabbitmq-server start ; rabbitmqctl add_vhost /test; /etc/init.d/rabbitmq-server stop
This way it remembers your config.
Yes, I would suggest adding the users when the container runs for the first time.
Instead of starting RabbitMQ directly, you can run a wrapper script that will take care of all the setup, and then start RabbitMQ. If the last step of the wrapper script is a process start, remember that you can use exec so that the new process replaces the script itself.
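A minimal sketch of such a wrapper, assuming hypothetical user details and flag path (the SERVER/CTL command variables are overridable so the control flow can be exercised without RabbitMQ installed):

```shell
#!/bin/sh
# First-run setup wrapper: create the users once, then start the broker
# in the foreground via exec so it becomes PID 1 and receives signals.
SERVER=${SERVER:-rabbitmq-server}
CTL=${CTL:-rabbitmqctl}
FLAG=${FLAG:-/var/lib/rabbitmq/.setup_done}

setup_and_run() {
    if [ ! -f "$FLAG" ]; then
        $SERVER -detached              # start temporarily so rabbitmqctl can talk to it
        $CTL add_user bunny password   # hypothetical user/password
        $CTL set_user_tags bunny administrator
        $CTL stop
        touch "$FLAG"                  # mark first-run setup as done
    fi
    exec $SERVER                       # replace this script with the broker
}

if [ "${1:-}" = run ]; then setup_and_run; fi
```

On every later start the flag file already exists, so the script skips straight to exec'ing the server.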
This is how I did it.
Dockerfile
FROM debian:jessie
MAINTAINER Francesco Casula <fra.casula@gmail.com>
VOLUME ["/var/www"]
WORKDIR /var/www
ENV HOSTNAME my-docker
ENV RABBITMQ_NODENAME rabbit@my-docker
COPY scripts /root/scripts
RUN /bin/bash /root/scripts/os-setup.bash && \
/bin/bash /root/scripts/install-rabbitmq.bash
CMD /etc/init.d/rabbitmq-server start && \
/bin/bash
os-setup.bash
#!/bin/bash
echo "127.0.0.1 localhost" > /etc/hosts
echo "127.0.1.1 my-docker" >> /etc/hosts
echo "my-docker" > /etc/hostname
install-rabbitmq.bash
#!/bin/bash
echo "NODENAME=rabbit@my-docker" > /etc/rabbitmq/rabbitmq-env.conf
echo 'deb http://www.rabbitmq.com/debian/ testing main' | tee /etc/apt/sources.list.d/rabbitmq.list
wget -O- https://www.rabbitmq.com/rabbitmq-release-signing-key.asc | apt-key add -
apt-get update
cd ~
wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.5/rabbitmq-server_3.6.5-1_all.deb
dpkg -i rabbitmq-server_3.6.5-1_all.deb
apt-get install -f -y
/etc/init.d/rabbitmq-server start
sleep 3
rabbitmq-plugins enable amqp_client mochiweb rabbitmq_management rabbitmq_management_agent \
rabbitmq_management_visualiser rabbitmq_web_dispatch webmachine
rabbitmqctl delete_user guest
rabbitmqctl add_user bunny password
rabbitmqctl set_user_tags bunny administrator
rabbitmqctl delete_vhost /
rabbitmqctl add_vhost symfony_prod
rabbitmqctl set_permissions -p symfony_prod bunny ".*" ".*" ".*"
rabbitmqctl add_vhost symfony_dev
rabbitmqctl set_permissions -p symfony_dev bunny ".*" ".*" ".*"
rabbitmqctl add_vhost symfony_test
rabbitmqctl set_permissions -p symfony_test bunny ".*" ".*" ".*"
/etc/init.d/rabbitmq-server restart
IS_RABBIT_INSTALLED=`rabbitmqctl status | grep RabbitMQ | grep "3\.6\.5" | wc -l`
if [ "$IS_RABBIT_INSTALLED" = "0" ]; then
exit 1
fi
IS_RABBIT_CONFIGURED=`rabbitmqctl list_users | grep bunny | grep "administrator" | wc -l`
if [ "$IS_RABBIT_CONFIGURED" = "0" ]; then
exit 1
fi
Don't forget to run the container by specifying the right host with the -h flag:
docker run -h my-docker -it --name=my-docker -v $(pwd)/htdocs:/var/www my-docker
The only thing that helped me was to change the default value of the MNESIA_BASE property in rabbitmq-env.conf to MNESIA_BASE=/data, and to add the command RUN mkdir /data to the Dockerfile before starting the server and adding the users.

Newline issues with Capistrano and Gitolite

I've set up gitolite with shell access, and I'm using Capistrano to deploy my code to production. The problem is that Capistrano bundles multiple commands into one invocation, joined with newlines, and gitolite has a security check that rejects newlines and dies. I'm not sure whether to tackle this from the Capistrano side or the gitolite side.
I'm seeing this error when running 'cap deploy':
executing "rm -rf /home/git/public_html/project/releases/20101129165633/log
/home/git/public_html/project/releases/20101129165633/public/system
/home/git/public_html/project/releases/20101129165633/tmp/pids &&\\\n
mkdir -p /home/git/public_html/project/releases/20101129165633/public &&\\\n
mkdir -p /home/git/public_html/project/releases/20101129165633/tmp &&\\\n ln -s /home/git/public_html/project/shared/log /home/git/public_html/project/releases/20101129165633/log &&\\\n ln -s /home/git/public_html/project/shared/system /home/git/public_html/project/releases/20101129165633/public/system &&\\\n ln -s /home/git/public_html/project/shared/pids /home/git/public_html/project/releases/20101129165633/tmp/pids"
servers: ["projectsite.com"]
[projectsite.com] executing command
ERROR MESSAGE:
** [out :: projectsite.com] I don't like newlines in the command: <COMMAND FROM ABOVE>
The gitolite code that handles this is here: https://github.com/sitaramc/gitolite/blob/pu/src/gl-auth-command
You've probably figured this out by now, but not seeing an answer to this made me sad.
Instead of newlines, you can join multiple commands with "; ". Here is an example deploy script:
role :server, "projectsite.com"
namespace :deploy do
desc "Does whatever beeudoublez wants"
task :default, :roles => :server, :except => { :no_release => true } do
run [ "rm -rf /home/git/public_html/project/releases/20101129165633/log /home/git/public_html/project/releases/20101129165633/public/system /home/git/public_html/project/releases/20101129165633/tmp/pids",
      "mkdir -p /home/git/public_html/project/releases/20101129165633/public",
      "mkdir -p /home/git/public_html/project/releases/20101129165633/tmp"].join("; ")
end
end