Delete a saved & locked zombie VM with vboxmanage - virtual-machine

I'm using Laravel Homestead and I've had my first encounter with a wild zombie VM that I can't destroy.
$vagrant global-status
...Nothing
$vboxmanage list vms
..."homestead-7" {a00faea2-9b77-466d-beac-ab892dc0f0c9}
$vboxmanage unregistervm homestead-7 --delete
...VBoxManage: error: Cannot unregister the machine 'homestead-7' while it is locked
$vboxmanage startvm homestead-7
...VBoxManage: error: The machine 'homestead-7' is already locked by a session (or being locked or unlocked)
VBoxManage: error: Details: code VBOX_E_INVALID_OBJECT_STATE (0x80bb0007), component MachineWrap, interface IMachine, callee nsISupports
VBoxManage: error: Context: "LaunchVMProcess(a->session, sessionType.raw(), env.raw(), progress.asOutParam())" at line 589 of file VBoxManageMisc.cpp

Kill the VBoxHeadless process and run vboxmanage unregistervm homestead-7 --delete.
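The lock is usually held by a leftover VBoxHeadless process. A minimal sketch of that cleanup (the PID shown is illustrative):
$ pgrep -fl VBoxHeadless
12345 VBoxHeadless --comment homestead-7 ...
$ kill 12345
$ vboxmanage unregistervm homestead-7 --delete
If the process ignores SIGTERM, kill -9 is the fallback.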
I found a similar SO question and answer.

Related

dfx start error "thread 'main' panicked at 'Error creating persistent pool at: ... "

I built a new Motoko (ICP) project using the dfx command. The first time, it started and deployed successfully.
Then I abruptly closed the terminal in which I had run the dfx start command. When I try to restart it, this error message appears:
thread 'main' panicked at 'Error creating persistent pool at: "/root/.local/share/dfx/network/local/state/replicated_state/node-100/ic_consensus_pool/consensus". Error: IO error: While lock file: /root/.local/share/dfx/network/local/state/replicated_state/node-100/ic_consensus_pool/consensus/LOCK: Resource temporarily unavailable', artifact_pool/src/rocksdb_pool.rs:222:25 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Any idea how to resolve this issue?
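"Resource temporarily unavailable" on the LOCK file usually means the replica process from the closed terminal is still alive and holding it. A minimal recovery sketch, assuming a standard dfx install (note that --clean wipes the local network state, including deployed canisters):
$ dfx stop
$ dfx start --clean
If the local state must be kept, find and stop the leftover replica process instead of passing --clean.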

Working on dbt: want to give the details of Snowflake in the profiles.yml file but getting an error

Trying to give the details of Snowflake in the dbt profiles.yml file, but as soon as I ran the command, i.e.
$atom /home/myname/.dbt/profiles.yml
it gives the below error:
/usr/bin/atom: line 190: 1705 Trace/breakpoint trap (core dumped) nohup "$ATOM_PATH" --executed-from="$(pwd)" --pid=$$ "$@" > "$ATOM_HOME/nohup.out" 2>&1
Failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Permission denied
Things I tried: I ran the below commands, but still no luck:
1)
$google-chrome --no-gpu --no-sandbox --disable-setuid-sandbox --headless --dump-dom http://www.chromestatus.com
Error:
[0627/161930.251811:ERROR:udev_watcher.cc(61)] Failed to enable receiving udev events.
[0627/161932.565713:ERROR:platform_shared_memory_region_posix.cc(46)] Descriptor access mode (0) differs from expected (2)
[0627/161932.566251:WARNING:crash_handler_host_linux.cc(366)] Could not translate tid - assuming crashing thread is thread group leader; syscall_supported=0
[0627/161932.769040:WARNING:crash_handler_host_linux.cc(366)] Could not translate tid - assuming crashing thread is thread group leader; syscall_supported=0
--2020-06-27 16:19:32-- https://clients2.google.com/cr/report
Resolving clients2.google.com (clients2.google.com)... 2404:6800:4009:805::200e, 172.217.174.238
Connecting to clients2.google.com (clients2.google.com)|2404:6800:4009:805::200e|:443... [0627/161933.036124:ERROR:headless_shell.cc(399)] Abnormal renderer termination.
connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘/dev/fd/4’
0K
Crash dump id: e870824b56e91b9f
2)
$ google-chrome
Error:
Failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Permission denied
Trace/breakpoint trap (core dumped)
[1772:1772:0100/000000.825375:ERROR:zygote_linux.cc(653)] write: Broken pipe (32)
[0627/162152.831614:ERROR:nacl_helper_linux.cc(308)] NaCl helper process running without a sandbox!
Most likely you need to configure your SUID sandbox correctly
Could anyone advise on the above issue?
This could be an Atom error. It could be packages or something they installed. I could be wrong, but I don't see anything in that error message that indicates it's a profiles.yml thing. I wonder if they can open other files in Atom just fine?
Alternatively, use a text editor that isn't based on Chromium.
Thanks @jake and @Christine from Fishtown Analytics.
Here are a few helpful links.
Atom issue;
Blogpost!
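For reference, a minimal Snowflake target in ~/.dbt/profiles.yml looks roughly like this; every value below is a placeholder, and the top-level profile name must match the profile: key in dbt_project.yml:
my_snowflake_profile:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: xy12345.us-east-1   # Snowflake account locator (placeholder)
      user: MY_USER
      password: MY_PASSWORD
      role: MY_ROLE
      warehouse: MY_WAREHOUSE
      database: MY_DATABASE
      schema: MY_SCHEMA
      threads: 4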

Live Migration Failure: unable to execute QEMU command 'migrate': Migration disabled: failed to allocate shared memory

I have a 2-node OpenStack Mitaka environment consisting of a controller/compute node and a compute node.
I've followed the setup guide to enable instance live migration using LVM block storage, i.e. there's no shared storage backend, just local LVM block storage.
When I use OpenStack Horizon to perform the live migration, a success message is displayed; however, the migration is far from successful. This worked pretty much out of the box with our Juno installation. I've exhausted Google and cannot find any other instances of people facing the same problem. I thought it might have been a time synchronisation problem, so I have set both nodes to UTC. Still the problem persists.
Source machine /var/log/nova/nova-compute.log
2016-08-12 15:56:42.120 2230 ERROR nova.virt.libvirt.driver [req-b71ea7b0-5fa8-4b57-92d2-4edec62135c2 b017d86d1143461a92a267d4b912c104 88c686f09e1b427fb750f5c00716f84e - - -] [instance: 5763b6b6-370c-448c-8e8f-8b71eafaa8f1] Migration operation has aborted
2016-08-12 15:56:42.470 2230 ERROR nova.virt.libvirt.driver [req-b71ea7b0-5fa8-4b57-92d2-4edec62135c2 b017d86d1143461a92a267d4b912c104 88c686f09e1b427fb750f5c00716f84e - - -] [instance: 5763b6b6-370c-448c-8e8f-8b71eafaa8f1] Live Migration failure: internal error: unable to execute QEMU command 'migrate': Migration disabled: failed to allocate shared memory
Target node /var/log/libvirt/libvirtd.log
2016-08-12 15:56:41.864+0000: 2170: error : qemuMonitorJSONGetMigrationStatsReply:2443 : internal error: info migration reply was missing return status
2016-08-12 15:56:41.864+0000: 2170: error : virNetClientProgramDispatchError:177 : Cannot open log file: '/var/log/libvirt/qemu/instance-0000006a.log': Device or resource busy
There are no other events captured in the source or target nova or libvirt logs.
I should also note that I am trying to use qemu+tcp (libvirt listening enabled, default tcp port, no auth) rather than qemu+ssh in order to keep things simple while testing. In fact, I intend to only use qemu+tcp anyway.
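For reference, the qemu+tcp listener described above maps to settings along these lines in /etc/libvirt/libvirtd.conf (a sketch, assuming the stock Ubuntu libvirt packaging):
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
libvirtd also needs its listen flag enabled, e.g. libvirtd_opts="-d -l" in /etc/default/libvirt-bin, or it won't open the TCP socket at all.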
Which version of Ubuntu did you deploy?
I had the same error with Ubuntu 14.04 and Mitaka.
I figured out that the default kernel (3.13) causes this problem.
I upgraded the kernel from 3.13 to 4.4 and the problem is gone now.
I hope my experience helps you solve this problem.
Thanks
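A sketch of that kernel upgrade path on Ubuntu 14.04 via the HWE stack (the package name is an assumption based on the standard 14.04 HWE offering, which shipped a 4.4 kernel; the version strings shown are illustrative):
$ uname -r
3.13.0-92-generic
$ sudo apt-get install --install-recommends linux-generic-lts-xenial
$ sudo reboot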

Zookeeper error connection loss exception

I'm running a SeqWare VM on an Amazon EC2 instance, and I'm trying to use the SeqWare query engine to query data from VCF files. When I first launch the instance and follow the instructions to import data, it works fine, and it continues to work until I stop the instance. When I restart it, it won't let me import anything, nor create a new workspace; it always returns the error below. I looked at the processes and found that none of the required nodes were running, so I logged in as root, went to the /etc/init.d directory, and started everything again, at which point, when I try to import data, I don't even get an error and I have to stop the process.
[seqware#master target]$ java -classpath seqware-distribution-0.13.6.7-qe-full.jar com.github.seqware.queryengine.system.importers.SOFeatureImporter -i ../../seqware-queryengine/src/test/resources/com/github/seqware/queryengine/system/FeatureImporter/consequences_annotated.vcf ALL.chr3.phase1_release_v3.20101123.snps_indels_svs.genotypes.3_100001-101000.vcf -o keyValueVCF.out -r hg_19 -s c111aea5-5e18-4c62-a8a7-ec82fe151301 -a ad_hoc -w VCFVariantImportWorker
[SeqWare Query Engine] 0 [main] ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - ZooKeeper exists failed after 3 retries
[SeqWare Query Engine] 1 [main] ERROR org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher - hconnection Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:154)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:226)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:580)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:569)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:186)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:100)
at com.github.seqware.queryengine.impl.HBaseStorage.<init>(HBaseStorage.java:89)
at com.github.seqware.queryengine.factory.SWQEFactory$Storage_Type$3.buildStorage(SWQEFactory.java:109)
at com.github.seqware.queryengine.factory.SWQEFactory.getStorage(SWQEFactory.java:174)
at com.github.seqware.queryengine.factory.SWQEFactory.getQueryInterface(SWQEFactory.java:199)
at com.github.seqware.queryengine.impl.SimpleModelManager.<init>(SimpleModelManager.java:49)
at com.github.seqware.queryengine.impl.HBaseModelManager.<init>(HBaseModelManager.java:36)
at com.github.seqware.queryengine.impl.MRHBaseModelManager.<init>(MRHBaseModelManager.java:32)
at com.github.seqware.queryengine.factory.SWQEFactory.getModelManager(SWQEFactory.java:211)
at com.github.seqware.queryengine.system.importers.FeatureImporter.performImport(FeatureImporter.java:66)
at com.github.seqware.queryengine.system.importers.SOFeatureImporter.runMain(SOFeatureImporter.java:141)
at com.github.seqware.queryengine.system.importers.SOFeatureImporter.main(SOFeatureImporter.java:60)
[SeqWare Query Engine] 3 [main] FATAL org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation - Unexpected exception during initialization, aborting
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:154)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:226)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:580)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:569)
at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:186)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:100)
at com.github.seqware.queryengine.impl.HBaseStorage.<init>(HBaseStorage.java:89)
at com.github.seqware.queryengine.factory.SWQEFactory$Storage_Type$3.buildStorage(SWQEFactory.java:109)
at com.github.seqware.queryengine.factory.SWQEFactory.getStorage(SWQEFactory.java:174)
at com.github.seqware.queryengine.factory.SWQEFactory.getQueryInterface(SWQEFactory.java:199)
at com.github.seqware.queryengine.impl.SimpleModelManager.<init>(SimpleModelManager.java:49)
at com.github.seqware.queryengine.impl.HBaseModelManager.<init>(HBaseModelManager.java:36)
at com.github.seqware.queryengine.impl.MRHBaseModelManager.<init>(MRHBaseModelManager.java:32)
at com.github.seqware.queryengine.factory.SWQEFactory.getModelManager(SWQEFactory.java:211)
at com.github.seqware.queryengine.system.importers.FeatureImporter.performImport(FeatureImporter.java:66)
at com.github.seqware.queryengine.system.importers.SOFeatureImporter.runMain(SOFeatureImporter.java:141)
at com.github.seqware.queryengine.system.importers.SOFeatureImporter.main(SOFeatureImporter.java:60)
I figured it out. The Apache services were installed from the Cloudera package. They weren't being restarted when the instance was restarted, and apparently just running their scripts from /etc/init.d was the incorrect way to do it. I found the commands to restart them in the Cloudera documentation.
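A sketch of what those restarts look like on a CDH-packaged install; the init script names below are illustrative and vary by CDH version, so check the Cloudera documentation for the exact ones:
$ sudo service zookeeper-server restart
$ sudo service hadoop-hbase-master restart
$ sudo service hadoop-hbase-regionserver restart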
I too faced this problem. I was able to solve it by providing the jute.maxbuffer parameter while starting ZooKeeper.
For more info, you can refer to
https://issues.apache.org/jira/browse/SOLR-4793
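jute.maxbuffer is a Java system property, so it has to be passed to the JVM at startup; a minimal sketch with an illustrative 4 MB value (zkServer.sh picks up extra JVM options from the JVMFLAGS environment variable):
$ export JVMFLAGS="-Djute.maxbuffer=4194304"
$ bin/zkServer.sh restart
The property generally needs the same value on the client JVMs that read the oversized znodes, not just on the server.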

Java 1.5 crash on AIX 5

I have a problem with Java 1.5.0 for AIX. The error happens only when I log on with a specific user on AIX (myuser). When I log on with another user, Java works fine.
The error comes up even when I execute just "java -version" or simply "java" (without the quotes, of course). I've tried executing it with the full path /usr/java5/jre/bin/java, but it still fails.
Java 1.4 was installed on the system too, so the user's $PATH variable contained /usr/java14/jre/bin. I removed that value, and I even uninstalled Java 1.4 so that only Java 5 exists on the system, but the error continues.
If I execute "java -fullversion" it doesn't crash.
This is part of the error (the full output is very long):
JVMJ9VM011W Unable to load j9dmp23: No such file or directory
JVMJ9VM011W Unable to load j9jit23: No such file or directory
JVMJ9VM011W Unable to load j9gc23: No such file or directory
JVMJ9VM011W Unable to load j9vrb23: No such file or directory
Unhandled exception
Type=Illegal instruction vmState=0x00000000
J9Generic_Signal_Number=00000010 Signal_Number=00000004 Error_Value=00000000
Signal_Code=0000001e
Handler1=F0719CC8 Handler2=F0714F5C
.....
Target=2_30_20091103_45935_bHdSMr (AIX 5.3)
CPU=ppc (4 logical CPUs) (0x7d0000000 RAM)
JavaVMInitArgs.nOptions=14:
-Xjcl:jclscar_23
-Dcom.ibm.oti.vm.bootstrap.library.path=/usr/java5/jre/bin
-Dsun.boot.library.path=/usr/java5/jre/bin
-Djava.library.path=/usr/java5/jre/bin:/usr/java5/jre/bin:/usr/java5/jre/bin/classic:/usr/java5/jre/bin:/sqllib/lib:/home/myuser/comm:/home/myuser/sys:/home/myuser/bin:/db2util/db2adm/sqllib/lib64:/usr/java5/jre/bin/j9vm:/usr/lib
-Djava.home=/usr/java5/jre
-Djava.ext.dirs=/usr/java5/jre/lib/ext
-Duser.dir=/home/myuser
_j2se_j9=70912 (extra info: F070EA2C)
-Xdump
vfprintf (extra info: 300017A4)
-Dinvokedviajava
-Djava.class.path=/db2util/db2adm/sqllib/java/db2java.zip:/db2util/db2adm/sqllib/java/db2jcc.jar:/db2util/db2adm/sqllib/java/sqlj.zip:/db2util/db2adm/sqllib/function:/db2util/db2adm/sqllib/java/db2jcc_license_cu.jar:.
vfprintf
_port_library (extra info: F070EE30)
Note: "Enable full CORE dump" in smit is set to FALSE and as a result there will be limited threading information in core file.
Note: dump may be truncated if "ulimit -c" is set too low
Generated system dump: {default OS core name}
(no Thread object associated with thread)
(no Thread object associated with thread)
Unhandled exception in signal handler
ksh: 2179192 IOT/Abort trap(coredump)
I found the error. The problem was a line in the .profile that sets the LIBPATH environment variable:
export LIBPATH=/home/myuser/sys
I deleted that line from the .profile and Java worked.
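A quick way to confirm a culprit like this before editing .profile is to clear the variable in the current shell and retry (a sketch; on AIX, LIBPATH is the shared-library search path, so a bad value can break the JVM's loading of its own libraries, which matches the "Unable to load j9..." messages above):
$ echo $LIBPATH
/home/myuser/sys
$ unset LIBPATH
$ java -version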