error: Failed to attach device from error: internal error: No more available PCI slots - virtual-machine

Assumptions
Guest VM has been installed.
Guest VM is running: virsh start <vm_domain_name>.
Verify it with virsh list --all
Target PCI devices have been detached from the host with virsh nodedev-detach <pci_<domain>_<bus>_<slot>_<function>>. A list of PCI targets can be obtained from the output of virsh nodedev-list; note that the device name format may differ from the example here (see the sketch below).
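For reference, a minimal sketch of listing and detaching a target device (the address 0000:6b:00.1 is only an illustration borrowed from the XML later in this post; use the names your own host reports):
virsh nodedev-list --cap pci
virsh nodedev-detach pci_0000_6b_00_1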
Problem Description
error: Failed to attach device from add_pci_vf.xml
error: internal error: No more available PCI slots
This error appears when trying to add multiple PCI devices to a guest virtual machine.
If one just needs to add a single PCI device, the following command suffices to enable its PCI passthrough:
virsh attach-device ubuntu-guest add_pci_vf.xml --live
The --live option at the end of the command makes the PCI passthrough effective immediately. Access the guest and run lspci -nn to confirm that the device is now visible to the guest.
Solution
If one wishes to add additional devices, one must replace --live with --config. The changes will not be effective immediately. Follow the recipe below to add multiple PCI devices and avoid the aforementioned error (a scripted version is sketched after the commands).
virsh attach-device <vm_domain_name> <pci_device_0.xml> --config
virsh attach-device <vm_domain_name> <pci_device_1.xml> --config
virsh attach-device <vm_domain_name> <pci_device_2.xml> --config
virsh destroy <vm_domain_name>
virsh start <vm_domain_name>
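The same recipe can also be scripted in a small loop; this is only a sketch that assumes the VF definitions are named pci_device_0.xml through pci_device_2.xml as above, with <vm_domain_name> being the same placeholder used throughout:
# attach every VF definition persistently, then restart the guest so the changes take effect
for f in pci_device_0.xml pci_device_1.xml pci_device_2.xml; do
  virsh attach-device <vm_domain_name> "$f" --config
done
virsh destroy <vm_domain_name>
virsh start <vm_domain_name>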
The syntax of pci_device_?.xml files should look like the following (may vary depending on the target co-processor):
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio' />
  <source>
    <address domain='0x0000' bus='0x6b' slot='0x00' function='0x01' />
  </source>
</hostdev>
Note: In my experience, the managed='yes' attribute is essential.
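Before restarting the guest, one way to check that the --config attachments were written to the persistent domain definition is to inspect the domain XML; a minimal sketch (the grep pattern is only illustrative):
virsh dumpxml <vm_domain_name> --inactive | grep -A 4 "<hostdev"
Each attached VF should show up as its own <hostdev> block.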

The solution described in the original post worked for me. Since I had trouble finding it online, I am posting it here for others facing similar issues.
Summary
Add the devices (VFs) one by one.
virsh attach-device <vm_domain_name> <pci_device_0.xml> --config
virsh attach-device <vm_domain_name> <pci_device_1.xml> --config
virsh attach-device <vm_domain_name> <pci_device_2.xml> --config
Then,
virsh destroy <vm_domain_name>
virsh start <vm_domain_name>
The syntax of pci_device_?.xml files is similar to:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio' />
  <source>
    <address domain='0x0000' bus='0x6b' slot='0x00' function='0x01' />
  </source>
</hostdev>

Related

Installing Tensorflow GPU/CUDA dependencies on a machine with no internet access

I have 2 machines -
dccten1a with no internet access where I need to install Tensorflow with GPU support
dccten1b with internet access so that I can download packages and transfer to dccten1a
In the final step of installing TensorFlow, when running the bazel build command to produce a .whl file, I get an error saying that it can't find a file in the folder it is looking in, and it obviously cannot download it either, since dccten1a has no internet access.
bazel build --config=opt --config=cuda /home/tensorflow/Documents/tf_dependencies/tensorflow-master/tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"
ERROR: error loading package '': Encountered error while reading extension file 'closure/defs.bzl': no such package '@io_bazel_rules_closure//closure': Error downloading [http://bazel-mirror.storage.googleapis.com/github.com/bazelbuild/rules_closure/archive/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz, https://github.com/bazelbuild/rules_closure/archive/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz] to /home/xyzuser/.cache/bazel/_bazel_xyzuser/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz: All mirrors are down: [Unknown host: github.com, Unknown host: mirror.bazel.build]
I checked on the system, and there is no such directory as shown in the error message (i.e., /home/xyzuser/.cache/bazel/_bazel_xyzuser/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure/). So I created it, searched for and found the requisite (?) file online, downloaded it on the machine with internet access, transferred it to the target machine, moved it into the just-created directory, and tried running the command again:
(tensorflow@dccten1a):
mkdir -p /home/tensorflow/.cache/bazel/_bazel_tensorflow/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure
(tensorflow@dccten1b):
http://bazel-mirror.storage.googleapis.com/github.com/bazelbuild/rules_closure/archive/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz
sudo scp -r /home/tensorflow/Downloads/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz tensorflow@160.88.114.17:/home/tensorflow/Documents/tf_dependencies
(tensorflow@dccten1a):
mv /home/tensorflow/Documents/tf_dependencies/5ca1dab6df9ad02050f7ba4e816407f88690cf7d.tar.gz /home/tensorflow/.cache/bazel/_bazel_tensorflow/cb1e63cb5e61cab49a9fd2f5ba92d003/external/io_bazel_rules_closure
Then I ran the bazel build command again, but the same error persists.
Use --experimental_repository_cache to download the dependencies on the machine with internet access, transfer the cache to the machine without internet access, and use --experimental_repository_cache to refer to the same cache.
e.g.
1) On the machine with internet access, run
tensorflow@dccten1b $ bazel build --experimental_repository_cache=/path/to/some/folder --config=opt --config=cuda /home/tensorflow/Documents/tf_dependencies/tensorflow-master/tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"
2) Copy the cache at /path/to/some/folder to the machine without internet access using an SD card or flash drive.
3) On the machine without internet access, run the same command again, setting the flag to the cache's location.
tensorflow@dccten1a $ bazel build --experimental_repository_cache=/path/to/some/folder --config=opt --config=cuda /home/tensorflow/Documents/tf_dependencies/tensorflow-master/tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"
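If the two machines can reach each other over the network (as the scp in the question suggests), the cache can also be transferred like this; a rough sketch with hypothetical paths, reusing the host IP from the question:
# on dccten1b (internet access): pack the populated repository cache
tar czf repo_cache.tar.gz -C /path/to/some/folder .
scp repo_cache.tar.gz tensorflow@160.88.114.17:/home/tensorflow/
# on dccten1a (no internet): unpack it to the same location before running the build
mkdir -p /path/to/some/folder
tar xzf /home/tensorflow/repo_cache.tar.gz -C /path/to/some/folder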

Nvidia GPU passthrough fail with code 43

I'm currently trying to pass an NVIDIA GPU to a Windows 10 guest using QEMU 2.5 and libvirt 1.3.5.
I see "Error 43" on Nvidia GPU in Device Manager.
I tried to hide the hypervisor by adding "kvm=off" and "hv_vendor_id=123456780ab", but it does not work for me. I searched on Google and people have solved the problem this way.
I also still see Virtual Machine: Yes in Task Manager.
Did I use it the wrong way? I can pass an AMD GPU to a Windows guest (AMD does not check for KVM virtualization).
Can I spoof NVIDIA some other way?
My system information:
#uname -a
Linux ns.mqcache.net 4.2.0-1.el7.elrepo.x86_64 #1 SMP Sun Aug 30 21:25:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
#/root/qemu25/qemu/x86_64-softmmu/qemu-system-x86_64 --version
QEMU emulator version 2.5.1.1, Copyright (c) 2003-2008 Fabrice Bellard
GPU:
02:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 620 OEM] (rev a1)
02:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1)
libvirt.xml
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm">
  <name>win10</name>
  <os>
    <type machine="q35">hvm</type>
    <boot dev="hd"/>
    <boot dev="cdrom"/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <vendor_id state='on' value='1234567890ab'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
  </features>
  <clock offset="localtime">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
    <timer name="hpet" present="no"/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <vcpu current="4">4</vcpu>
  <cpu mode="host-passthrough">
    <topology sockets="1" cores="4" threads="1"/>
  </cpu>
  <memory>8388608</memory>
  <currentMemory>8388608</currentMemory>
  <devices>
    <emulator>/root/qemu25/qemu/x86_64-softmmu/qemu-system-x86_64</emulator>
    <disk device="disk" type="file">
      <driver name="qemu" type="qcow2"/>
      <source file="/root/vm/win10/image.qcow2"/>
      <target bus="virtio" dev="vda"/>
    </disk>
    <sound model="ac97"/>
    <interface type="bridge">
      <mac address="fa:16:3e:81:00:03"/>
      <source bridge="eucabr"/>
      <model type="virtio"/>
      <driver name="qemu"/>
      <alias name="net0"/>
    </interface>
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x02" slot="0x00" function="0x1"/>
      </source>
    </hostdev>
  </devices>
  <qemu:commandline>
    <qemu:arg value="-machine"/>
    <qemu:arg value="smm=off"/>
    <qemu:arg value="-device"/>
    <qemu:arg value="ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1"/>
    <qemu:arg value="-device"/>
    <qemu:arg value="vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on"/>
    <qemu:arg value="-vga"/>
    <qemu:arg value="none"/>
  </qemu:commandline>
</domain>
qemu command
/root/qemu25/qemu/x86_64-softmmu/qemu-system-x86_64 \
-name win10 \
-machine q35,accel=kvm,usb=off \
-cpu host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=blah \
-m 2048 \
-realtime mlock=off \
-smp 2,sockets=1,cores=2,threads=1 \
-no-user-config \
-nodefaults \
-rtc base=localtime \
-no-shutdown \
-boot strict=on \
-device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e \
-device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x1 \
-drive file=/root/vm/win10/snap.qcow2,if=none,id=drive-virtio-disk0,format=qcow2 \
-device virtio-blk-pci,scsi=off,bus=pci.2,addr=0x2,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-k en-us \
-device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x4 \
-machine smm=off \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
-msg timestamp=on \
-vga none
Looking forward to your help!
You need to pass a copy of the unmodified video card ROM to the VM.
You need a secondary GPU that you can use as the primary for this process. You cannot dump a clean copy of the BIOS without having the passthrough GPU as a secondary card.
Put the extra card in the primary slot and the intended passthrough card in another PCIe port, and boot up.
Find your intended GPU again via lspci -v. In my case it had about the same address.
Now you can dump the ROM to a file:
# echo "0000:05:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
# cd /sys/bus/pci/devices/0000\:05\:00.0
# echo 1 > rom
# cat rom > /home/username/KVM/evga_gtx970.dump
# echo 0 > rom
# echo "0000:05:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
In this case, 0000:05:00.0 is my PCI card address. You don't really need the bind step at the bottom, since you'll be rebooting anyway.
You can check the integrity of the ROM dump with the handy utility at https://github.com/awilliam/rom-parser. My ROM looks like:
# ./rom-parser evga_gtx970.dump
Valid ROM signature found @0h, PCIR offset 1a0h
PCIR: type 0 (x86 PC-AT), vendor: 10de, device: 13c2, class: 030000
PCIR: revision 0, vendor revision: 1
Valid ROM signature found @f400h, PCIR offset 1ch
PCIR: type 3 (EFI), vendor: 10de, device: 13c2, class: 030000
PCIR: revision 3, vendor revision: 0
EFI: Signature Valid, Subsystem: Boot, Machine: X64
Last image
You should have both an EFI and a non-EFI x86 ROM in the dump (I think most cards have both).
Turn off the machine and put your GTX 1070 back in the primary slot.
After booting, edit your VM XML; in the <hostdev> section for your GPU (if you have already assigned the GPU to the VM), add a <rom file='path/to/dump/here'/> line. My full section looks like:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <rom bar='on' file='/home/username/KVM/evga_gtx970.dump'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</hostdev>
This will have the VM start the card with that BIOS instead of whatever the kernel gives it.
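As a sketch, the change can be applied and double-checked with the usual libvirt tooling (win10 here just stands for whatever your domain is called):
virsh edit win10
virsh dumpxml win10 | grep "<rom"
The dump should show the <rom ... file='...'/> line pointing at your ROM dump.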
source
Please note that you have to use OVMF (EFI), because SeaBIOS does not use the card ROM properly.
If you're on OVMF or some other UEFI, make sure to triple-check that your card is UEFI-ready, especially on stuff that is older than ~2014.
I was under the false impression that mine was (GTX 770) when, in fact, it wasn't (I had looked at the wrong version of the ROM online), and I wasted almost 2 days ripping my hair out. Look up UEFI support like so and look for ROM updates here.
I flashed my card, but I think you can also pass a UEFI-enabled ROM as romfile=. It appears that other manufacturers' ROMs could work too, if yours doesn't have a UEFI fix available.
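If you start QEMU directly rather than through libvirt, the equivalent of the <rom file=.../> element is the romfile= property of vfio-pci; a sketch based on the command line from the question (the ROM path is hypothetical):
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,romfile=/root/vm/roms/gpu_uefi.rom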

Bazel build fails with "Executing genrule @six_archive//:copy_six failed" error while building syntaxnet

I'm trying to follow the instructions at syntaxnet's github page to build syntaxnet parser models.
My system is Debian Wheezy; it shouldn't be very different from Ubuntu 14.04 LTS or 15.05. I have compiled bazel 0.2.2 (as opposed to 0.2.2b) from source, and it appears to work correctly.
Whenever I launch the bazel test syntaxnet/... util/utf8/... command, no tests are executed (all are skipped) and I get some quite cryptic error messages. Here's an example:
root@host:~/tensorflow_syntaxnet/models/syntaxnet# ../../bazel/output/bazel test syntaxnet/... util/utf8/...
Extracting Bazel installation...
.............
INFO: Found 65 targets and 12 test targets...
ERROR: /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/external/six_archive/BUILD:1:1: Executing genrule @six_archive//:copy_six failed: namespace-sandbox failed: error executing command /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox ... (remaining 5 argument(s) skipped).
unshare failed with EINVAL even after 101 tries, giving up.
INFO: Elapsed time: 95.469s, Critical Path: 22.46s
//syntaxnet:arc_standard_transitions_test NO STATUS
//syntaxnet:beam_reader_ops_test NO STATUS
//syntaxnet:graph_builder_test NO STATUS
//syntaxnet:lexicon_builder_test NO STATUS
//syntaxnet:parser_features_test NO STATUS
//syntaxnet:parser_trainer_test NO STATUS
//syntaxnet:reader_ops_test NO STATUS
//syntaxnet:sentence_features_test NO STATUS
//syntaxnet:shared_store_test NO STATUS
//syntaxnet:tagger_transitions_test NO STATUS
//syntaxnet:text_formats_test NO STATUS
//util/utf8:unicodetext_unittest NO STATUS
Executed 0 out of 12 tests: 12 were skipped.
I'm using Oracle Java 8 JDK as recommended, and my compiler is:
~/tensorflow_syntaxnet/models/syntaxnet# gcc --version
gcc (Debian 4.7.2-5) 4.7.2
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Tried looking into the namespace-sandbox binary that's mentioned in the error message, but before I dive deep into this, I thought I'd ask here.
~/tensorflow_syntaxnet/models/syntaxnet# ls -l /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox
lrwxrwxrwx 1 root root 108 May 13 14:52 /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox -> /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
~/tensorflow_syntaxnet/models/syntaxnet# readlink /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox
/root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
Command seems to work fine though:
~/tensorflow_syntaxnet/models/syntaxnet# file $(readlink /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox)
/root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[md5/uuid]=0xecfd97b6a6b9a193b045be13654bd55b, not stripped
~/tensorflow_syntaxnet/models/syntaxnet# /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
No command specified.
Usage: /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox [-S sandbox-root] -- command arg1
provided: /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
Mandatory arguments:
-S <sandbox-root> directory which will become the root of the sandbox
-- command to run inside sandbox, followed by arguments
Optional arguments:
-W <working-dir> working directory
-T <timeout> timeout after which the child process will be terminated with SIGTERM
-t <timeout> in case timeout occurs, how long to wait before killing the child with SIGKILL
-d <dir> create an empty directory in the sandbox
-M/-m <source/target> system directory to mount inside the sandbox
Multiple directories can be specified and each of them will be mounted readonly.
The -M option specifies which directory to mount, the -m option specifies where to
mount it in the sandbox.
-n if set, a new network namespace will be created
-r if set, make the uid/gid be root, otherwise use nobody
-D if set, debug info will be printed
-l <file> redirect stdout to a file
-L <file> redirect stderr to a file
@FILE read newline-separated arguments from FILE
Any idea?
UPDATE: I have done exactly the same steps on an Ubuntu 14.04 LTS machine (my small workstation, as opposed to the production server running Debian) and everything works well there, with all tests passing. I wonder what the difference is.
Apparently some permission errors happen when setting up the sandbox. A quick workaround is to deactivate the sandbox by using --genrule_strategy=standalone --spawn_strategy=standalone (note that the second one is already specified in the TensorFlow rc file).
You can set those flags in your ~/.bazelrc:
echo "build --genrule_strategy=standalone --spawn_strategy=standalone" >>~/.bazelrc

Cloud VM instance has broken packages after downgrading packages to an earlier version

I did an apt-get upgrade because the load times of our production server were about 40 seconds. I don't have a snapshot from before or after the upgrade (although there is a snapshot that is six months old). Load times improved to about 15 seconds, but our erizo service stopped working. Erizo was also running on that instance. Restarting the services didn't help, so I tried downgrading the packages to their previous versions (https://askubuntu.com/questions/138284/how-to-downgrade-a-package-via-apt-get), just like they were before, but for almost every package there was an error: the previous package version did not exist (which is strange, because I copied the versions from the output of dpkg -l).
Only a few of them were successfully downgraded, but I got a serious error when downgrading e2fslibs to its previous version: The following packages have unmet dependencies:
e2fsprogs: PreDepends: e2fslibs
Somehow that messed up initramfs and/or initramfs-tools and now the instance is running but I can't get into it.
Connecting to the instance in the Google Cloud Platform console: Connecting...
Could not connect, retrying (1/3).
Google Cloud Shell isn't able to gcloud compute ssh: Permission denied (publickey).
Using gcloud locally also says Permission denied (publickey).
I checked the following:
There are project public keys defined; there aren't any instance public keys defined or any other metadata ( Google Cloud SSH Keys )
In Google Cloud Platform >> Compute Engine >> VM instances >> Permissions, I see 'compute' is disabled.
Verify that the daemon is running by navigating to the serial console output page and looking for output lines prefixed with the accounts-from-metadata: string. If you are using a standard image but you do not see these output prefixes in the serial console output, the daemon might be stopped. --> I don't see this, so I expect it's NOT running.
Check firewall rules (gcloud compute firewall-rules list):
default-allow-ssh default 0.0.0.0/0 tcp:22 //rule is present
The following packages were upgraded:
apt
apt-transport-https
apt-utils
binutils
cloud-init
cloud-initramfs-growroot
cloud-initramfs-rescuevol
comerr-dev
dosfstools
e2fslibs
e2fsprogs
gce-cloud-config
gce-daemon
gce-imagebundle
gce-startup-scripts
google-cloud-sdk
landscape-client
landscape-common
libapt-inst1.4
libapt-pkg4.12
libcomerr2
libss2
libudev0
mountall
nginx
nginx-common
nginx-full
ntp
ntpdate
procps
python-apt
python-apt-common
python-lazr.restfulclient
udev
unattended-upgrades
update-manager-core
upstart
whoopsie
x11-utils
This is from the serial output:
- mountall: Event failed
- landscape-client is not configured, please run landscape-config.
What to do next?
Apply a startup script to the running instance (following https://cloud.google.com/compute/docs/startupscript) and try to perform apt-get upgrade?
Try to create a new public key (again) in Google Cloud Shell to access the instance?
In Google Cloud Shell, the first time, this file was generated after typing gcloud compute --project "enduring-palace-762" ssh --zone "europe-west1-c" "tta-media-test-2":
WARNING: The private SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key. This tool needs to create the directory /home/developer/.ssh
The generated public key was stored in /home/developer/.ssh/google_compute_engine.pub. I made a copy of that, prepended the username, and added the content of the public key to Compute Engine >> Metadata >> SSH keys. The key is accepted, but the username doesn't show up like it does with all the other username/key pairs.
I still get a Permission denied (publickey) error, though, when using gcloud compute ssh tta-media-test-2 --zone europe-west1-c.
When I provide the SSH key file like this (run from the folder containing the key file):
gcloud compute ssh tta-media-test-2 --zone europe-west1-c --ssh-key-file=my-ssh-keys_copy.pub
WARNING: The public SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
I get the same result when I generate a new key with ssh-keygen -t rsa -f my-ssh-keys.
Any other possible solution would be much appreciated.
[Update] I am able to ssh into the 'broken' instance from my local machine using ssh user@externalIpOfInstance. My plan is to bring it to an upgraded, stable state, create a snapshot, and go from there.
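For the snapshot part of that plan, a rough gcloud sketch (the disk name tta-media-test-2 and the snapshot name are assumptions; check gcloud compute disks list for the actual disk name):
gcloud compute disks list
gcloud compute disks snapshot tta-media-test-2 --zone europe-west1-c --snapshot-names tta-media-test-2-rescue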
sudo apt-get -f install
0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up initramfs-tools (0.99ubuntu13.5) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.13.0-79-generic
E: /usr/share/initramfs-tools/hooks/fixrtc failed with return 1.
update-initramfs: failed for /boot/initrd.img-3.13.0-79-generic with 1.
dpkg: error processing initramfs-tools (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
google-chrome-stable
The following packages will be upgraded:
comerr-dev libcomerr2 libss2 unattended-upgrades
4 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
1 not fully installed or removed.
Need to get 0 B/188 kB of archives.
After this operation, 4,096 B of additional disk space will be used.
Do you want to continue [Y/n]? y
Preconfiguring packages ...
(Reading database ... 178509 files and directories currently installed.)
Preparing to replace comerr-dev 2.1-1.42-1ubuntu2.2 (using .../comerr-dev_2.1-1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement comerr-dev ...
Preparing to replace libcomerr2 1.42-1ubuntu2.2 (using .../libcomerr2_1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement libcomerr2 ...
Preparing to replace libss2 1.42-1ubuntu2.2 (using .../libss2_1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement libss2 ...
Preparing to replace unattended-upgrades 0.76ubuntu1.1 (using .../unattended-upgrades_0.76ubuntu1.2_all.deb) ...
Unpacking replacement unattended-upgrades ...
Processing triggers for install-info ...
Processing triggers for man-db ...
Processing triggers for ureadahead ...
Setting up initramfs-tools (0.99ubuntu13.5) ...
update-initramfs: deferring update (trigger activated)
Setting up libcomerr2 (1.42-1ubuntu2.3) ...
Setting up comerr-dev (2.1-1.42-1ubuntu2.3) ...
Setting up libss2 (1.42-1ubuntu2.3) ...
Setting up unattended-upgrades (0.76ubuntu1.2) ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.13.0-79-generic
E: /usr/share/initramfs-tools/hooks/fixrtc failed with return 1.
update-initramfs: failed for /boot/initrd.img-3.13.0-79-generic with 1.
dpkg: error processing initramfs-tools (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Errors were encountered while processing:
initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
sudo apt-get remove initramfs-tools-bin
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
cron : Depends: adduser but it is not going to be installed
procps : Depends: initscripts
upstart : Depends: initscripts
Depends: mountall
Depends: ifupdown (>= 0.6.10ubuntu5)
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
what to do here?
If you were able to SSH into the instance using a given SSH key before, the most likely reason it would stop working is that you somehow removed that SSH key, or that the SSH daemon isn't running or is otherwise broken. It appears that the downgrade broke this machine.
Why do you need this particular VM instance? Does it have important data? If so, you can shut it off, mount its disk using a fresh VM instance, and copy that data off.
If it runs a service, you should probably cut over to a new machine: even if you're able to get into the instance, there's no telling what still works and what doesn't.
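A rough sketch of the copy-the-data-off approach (the rescue instance name rescue-vm, the disk name, and the device path /dev/sdb1 are all assumptions; adjust them to what gcloud compute disks list and lsblk report):
# stop the broken instance and attach its disk to a fresh rescue instance
gcloud compute instances stop tta-media-test-2 --zone europe-west1-c
gcloud compute instances attach-disk rescue-vm --disk tta-media-test-2 --zone europe-west1-c
# on the rescue instance: mount the disk read-only and copy the data off
sudo mkdir -p /mnt/broken
sudo mount -o ro /dev/sdb1 /mnt/broken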
I'm facing an issue with the BigBlueButton installation:
Reading state information...
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
bigbluebutton : Depends: bbb-config but it is not going to be installed
gce-compute-image-packages : Depends: google-compute-engine but it is not going to be installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).

Boot2Docker Start up fails

I am completely new to Docker. I have installed it from source. I am trying to run it from cmd by using boot2docker start, but I get the following:
boot2docker start
Failed to start machine boot2docker-vm <run again with v for details>
boot2docker init
virtual machine boot2docker-vm already exists
boot2docker start
Failed to start machine boot2docker-vm exit status:1
So, for you guys this is probably a simple one. I don't know what to do. I tried SO, but I wasn't able to understand the solution, so I failed to fix it. Please provide some suggestions.
EDIT:
I hope this will be helpful. There is something disabled in the BIOS.
boot2docker -v start
Boot2Docker-cli version: v1.4.1
Git commit: 43241cb
2014/12/18 16:12:35 executing: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe
showvminfo boot2docker-vm --machinereadable
2014/12/18 16:12:35 executing: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe
guestproperty set boot2docker-vm /VirtualBox/GuestAdd/SharedFolders/MountPrefix
/
2014/12/18 16:12:36 executing: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe
guestproperty set boot2docker-vm /VirtualBox/GuestAdd/SharedFolders/MountDir /
2014/12/18 16:12:36 executing: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe
sharedfolder add boot2docker-vm --name c/Users --hostpath C:\Users --automount
VBoxManage.exe: error: Shared folder named 'c/Users' already exists
VBoxManage.exe: error: Details: code VBOX_E_OBJECT_IN_USE (0x80bb000c), component SessionMachine, interface IMachine, callee IUnknown
VBoxManage.exe: error: Context: "CreateSharedFolder(Bstr(name).raw(), Bstr(hostpath).raw(), fWritable, fAutoMount)" at line 1009 of file VBoxManageMisc.cpp
2014/12/18 16:12:36 executing: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe
setextradata boot2docker-vm VBoxInternal2/SharedFoldersEnableSymlinksCreate/c/Users 1
2014/12/18 16:12:36 executing: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe
startvm boot2docker-vm --type headless
Waiting for VM "boot2docker-vm" to power on...
VBoxManage.exe: error: VT-x is disabled in the BIOS. (VERR_VMX_MSR_VMXON_DISABLED)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component Console, interface IConsole
2014/12/18 16:12:38 executing: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe
showvminfo boot2docker-vm --machinereadable
error in run: Failed to start machine "boot2docker-vm": exit status 1
Cause:
The nugget of info you need to pay attention to in your error log is
Waiting for VM "boot2docker-vm" to power on...
VBoxManage.exe: error: VT-x is disabled in the BIOS. (VERR_VMX_MSR_VMXON_DISABLED)
This means that your current workstation's BIOS settings are preventing virtualization of another operating system on your CPU. Docker runs a virtual Linux OS on your machine, so this is the issue.
For Intel chips, virtualization features are usually called VT-x or something like that. For AMD chips, virtualization features are called AMD-V. I use an Intel chip like you though, so this solution pertains to Intel chips.
Solution:
Power down, go into your BIOS, and enable VT-x. In my BIOS, on a Lenovo ThinkPad T440, the setting that needed to be changed was under Security -> Virtualization.
Power up and go back to your C:/path/to/Boot2Docker for Windows folder.
Run boot2docker delete to ensure no VMs are running.
Run boot2docker init to initialize the VM.
Run boot2docker start to create a new Docker Virtual Machine!
To get access to your Docker VM, run boot2docker ssh since the Docker Client doesn't run on Windows as of version 1.5.0.
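In short, once VT-x is enabled in the BIOS, the recovery sequence from the Windows command prompt is just a recap of the steps above (no additional commands are assumed):
boot2docker delete   # remove the old, broken boot2docker-vm
boot2docker init     # recreate the VM
boot2docker start    # boot the Docker VM
boot2docker ssh      # get a shell inside the VM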