Why am I getting Guru Meditation -79 (VERR_INVALID_STATE) in VirtualBox?

I've been running a guest OS in VirtualBox and, while I was doing some work in my host OS, the computer locked up, so I had to reboot using my laptop's power button.
Since then, whenever I try to start the guest OS in VirtualBox, I get the Guru Meditation error.
I've been searching for a solution on the Internet, but I haven't been able to fix it yet.
Virtualization is enabled in the BIOS, the guest's RAM is set to 1 GB (my computer has 8 GB), and even so it doesn't work. I've also tried reinstalling VirtualBox and deleting and re-importing the virtual machine image, but with no success.
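For reference, the delete-and-re-import step I tried looks roughly like the Python sketch below (not a known fix, just what I did). The VM name is taken from the log below, and the vagrant step assumes the box is managed by Vagrant, as the 'vagrant' shared-folder mapping in the log suggests:

# Sketch of the delete-and-re-import step described above.
# Assumptions: the VM name matches the log below and the box was created by Vagrant.
import subprocess

VM_NAME = "PSE4RAW_default_1643020476130_80381"

# Unregister the crashed VM and delete its files from disk.
subprocess.run(["VBoxManage", "unregistervm", VM_NAME, "--delete"], check=True)

# Re-create and re-import the machine; for a Vagrant-managed box this is
# "vagrant up", run from the directory containing the Vagrantfile.
subprocess.run(["vagrant", "up"], check=True)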
Here is an excerpt from the VM log file:
00:00:00.953342 VMEmt: Halt method global1 (5)
00:00:00.953439 VMEmt: HaltedGlobal1 config: cNsSpinBlockThresholdCfg=2000
00:00:00.953455 Changing the VM state from 'CREATING' to 'CREATED'
00:00:00.954529 SharedFolders host service: Adding host mapping
00:00:00.954541 Host path '/home/aaron/Trabajo/PSE4RAW', map name 'vagrant', writable, automount=false, automntpnt=, create_symlinks=true, missing=false
00:00:00.954948 Changing the VM state from 'CREATED' to 'POWERING_ON'
00:00:00.955002 AIOMgr: Endpoints without assigned bandwidth groups:
00:00:00.955010 AIOMgr: /home/aaron/VirtualBox VMs/PSE4RAW_default_1643020476130_80381/box-disk001.vmdk
00:00:00.955109 Changing the VM state from 'POWERING_ON' to 'RUNNING'
00:00:00.955124 Console: Machine state changed to 'Running'
00:00:00.956684 Changing the VM state from 'RUNNING' to 'GURU_MEDITATION'
00:00:00.956703 Console: Machine state changed to 'GuruMeditation'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
00:00:00.956894 !!
00:00:00.956894 !! VCPU0: Guru Meditation -79 (VERR_INVALID_STATE)
00:00:00.956898 !!
00:00:00.956902 !!
00:00:00.956903 !! {mappings, <NULL>}
00:00:00.956903 !!
00:00:00.956913 !!
00:00:00.956913 !! {hma, <NULL>}
00:00:00.956914 !!
00:00:00.956915 Hypervisor Memory Area (HMA) Layout: Base 00000000a0000000, 0x02800000 bytes
00:00:00.956917 00000000a06c7000-00000000a06e8000 00007f187aaab000 ffffa95341e65000 LOCKED alloc once (PGM_PHYS)
00:00:00.956921 00000000a06b9000-00000000a06c7000 00007f18b0033000 ffffa95341ce1000 LOCKED alloc once (VMM)
00:00:00.956922 00000000a06ab000-00000000a06b9000 00007f18b8006000 ffffa95341b49000 LOCKED alloc once (VMM)
00:00:00.956924 00000000a02aa000-00000000a06ab000 00007f187a0d3000 ffffa95346d0b000 LOCKED alloc once (PGM_PHYS)
00:00:00.956926 00000000a027b000-00000000a02aa000 00007f18b0041000 ffffa953441a1000 LOCKED alloc once (PGM_POOL)
00:00:00.956927 00000000a0278000-00000000a027b000 00007f18c41ac000 ffffa953403ab000 LOCKED alloc once (CPUM_CTX)
00:00:00.956929 00000000a0038000-00000000a0278000 00007f187a81c000 ffffa953468c9000 LOCKED Heap
00:00:00.956931 00000000a0023000-00000000a0038000 00007f18b8026000 ffffa95341eb3000 LOCKED VMCPU
00:00:00.956932 00000000a000e000-00000000a0023000 00007f18b803b000 ffffa95341e9d000 LOCKED VMCPU
00:00:00.956934 00000000a0000000-00000000a000e000 00007f18b8050000 ffffa95341e8d000 LOCKED VM
00:00:00.956935 !!
00:00:00.956936 !! {cpumguest, verbose}
00:00:00.956936 !!
I hope you can help me and thank you in advance!

Related

Can an endpoint be connected to more than one router in a NoC topology in gem5 Garnet 3.0?

I am running gem5 version 22.0.0.2. I operate Garnet in a standalone manner, in conjunction with the Garnet Synthetic Traffic injector. I want to emulate a routerless NoC, so I guess I need to connect an endpoint (e.g., cores, caches, directories) to more than one "local" router. I just use a Python configuration to set up the topology (I sketch a stripped-down version of it below). But when I do this, there is a runtime error:
build/NULL/mem/ruby/network/garnet/GarnetNetwork.cc:125: info: Garnet version 3.0
build/NULL/base/stats/group.cc:121: panic: panic condition statGroups.find(name) != statGroups.end() occurred: Stats of the same group share the same name `power_state`.
Memory Usage: 692360 KBytes
Program aborted at tick 0
Here is a description from the gem5 documentation: "Each network interface is connected to one or more “local” routers which is could be connected through an “External” link." Here is the link: https://www.gem5.org/documentation/general_docs/ruby/heterogarnet/
Here is the constructor of Stats::Group
Group(Group *parent, const char *name = nullptr)
Here is a description from the gem5 documentation: "there are special cases where the parent group may be null. One such special case is SimObjects where the Python code performs late binding of the group parent."
Here is the link: https://www.gem5.org/documentation/general_docs/statistics/api.
I guess the error may be related to this, but I don't know the exact reason.
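For reference, the stripped-down sketch of my topology configuration mentioned above follows the structure of the stock files in configs/topologies/ (the class name and option names are illustrative, not my exact code); the part I suspect matters is attaching the same ext_node to two routers through two ExtLinks:

# Illustrative sketch of a custom topology in configs/topologies/:
# every endpoint is attached to TWO "local" routers via two ExtLinks,
# which is the pattern that coincides with the duplicate-stats panic.
from topologies.BaseTopology import SimpleTopology

class TwoLocalRouters(SimpleTopology):
    description = 'TwoLocalRouters'

    def makeTopology(self, options, network, IntLink, ExtLink, Router):
        nodes = self.nodes
        link_latency = options.link_latency
        router_latency = options.router_latency

        # Two routers per endpoint.
        routers = [Router(router_id=i, latency=router_latency)
                   for i in range(2 * len(nodes))]
        network.routers = routers

        ext_links, int_links, link_id = [], [], 0
        for i, node in enumerate(nodes):
            r0, r1 = routers[2 * i], routers[2 * i + 1]
            # The same ext_node connected to two different routers.
            for r in (r0, r1):
                ext_links.append(ExtLink(link_id=link_id, ext_node=node,
                                         int_node=r, latency=link_latency))
                link_id += 1
            # Internal links between the two local routers (both directions).
            int_links.append(IntLink(link_id=link_id, src_node=r0,
                                     dst_node=r1, latency=link_latency))
            link_id += 1
            int_links.append(IntLink(link_id=link_id, src_node=r1,
                                     dst_node=r0, latency=link_latency))
            link_id += 1

        network.ext_links = ext_links
        network.int_links = int_links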
Any help would be appreciated.
Thank you.

geth reports eth_submitHashrate warnings while mining with Claymore on Windows 10 with 2 GPUs

I am aiming to GPU-mine Ethereum on a Windows 10 PC with two Radeon RX 590s.
geth version is
1.9.9-stable-01744997
cmd call to start geth:
geth --rpc --syncmode "fast" --cache 4096 --etherbase [ADR] --datadir "[MyDataDir]" --mine --minerthreads 0
Blockchain is up to date and everything seems fine on the geth side.
The miner used is
Claymore's Dual GPU Miner - v15.0
cmd to start miner:
EthDcrMiner64.exe -epool http://127.0.0.1:8545 -mode 1 -tt 75
Now the miner starts and seems to begin mining; the GPUs show they are doing heavy work.
Once the miner is running, it permanently outputs something like this (plus, every once in a while, some GPU info):
ETH: 12/21/19-15:46:33 - New job from 127.0.0.1:8545
ETH - Total Speed: 21.345 Mh/s, Total Shares: 0, Rejected: 0, Time: 45:52
ETH: GPU0 10.665 Mh/s, GPU1 10.680 Mh/s
So this looks good.
In the geth console meanwhile I get this output:
INFO [12-21|15:46:35.446] Imported new chain segment blocks=1 txs=74 mgas=9.921 elapsed=159.999ms mgasps=62.007 number=9141165 hash=05972d…032349 dirty=1019.58MiB
INFO [12-21|15:46:35.459] Commit new mining work number=9141166 sealhash=35129c…59de27 uncles=0 txs=0 gas=0 fees=0 elapsed=999.3µs
INFO [12-21|15:46:35.720] Commit new mining work number=9141166 sealhash=3788e2…df83fc uncles=0 txs=39 gas=9922304 fees=0.0347883012 elapsed=261.998ms
WARN [12-21|15:46:36.032] Served eth_submitHashrate conn=127.0.0.1:54083 reqid=6 t=0s err="the method eth_submitHashrate does not exist/is not available"
INFO [12-21|15:46:38.548] Commit new mining work number=9141166 sealhash=7451f4…69a431 uncles=0 txs=72 gas=9911680 fees=0.04369322037 elapsed=89.942ms
WARN [12-21|15:46:41.120] Served eth_submitHashrate conn=127.0.0.1:54083 reqid=6 t=0s err="the method eth_submitHashrate does not exist/is not available"
There is this warning/error message:
err="the method eth_submitHashrate does not exist/is not available"
But it also states "Commit new mining work".
I am quite unsure now.
Am I actually mining, or am I only wasting electricity because the work is never committed?
You have not connected to any mining pool, only to your own geth node, meaning that you are mining solo and competing against the whole world. When mining solo, you have no shares: you either mine a whole block or get nothing. It is extremely hard to mine all alone, so it is advisable to join a mining pool. Claymore-Dual-Miner (CDM) has a list of available mining pool alternatives.
Also, when mining solo with CDM, you can miss "mined" messages because in this mode it uses the HTTP protocol instead of the Stratum pool protocol. You can manually check your balance on Etherscan at any time, though.
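If you prefer, roughly the same balance check can be run against your own node over the RPC endpoint the miner already uses. A minimal Python sketch, assuming a recent web3.py is installed; MY_ETHERBASE is a placeholder for your --etherbase address:

# Minimal sketch: query the etherbase balance through the local geth RPC
# endpoint (http://127.0.0.1:8545, the same one Claymore points at).
from web3 import Web3

MY_ETHERBASE = "0x0000000000000000000000000000000000000000"  # placeholder

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
balance_wei = w3.eth.get_balance(Web3.to_checksum_address(MY_ETHERBASE))
print("balance:", Web3.from_wei(balance_wei, "ether"), "ETH")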
Use PhoenixMiner 5e with the -rate 2 option; it will stop showing this error.

cudaError_t 1 : "__global__ function call is not configured" returned from 'cublasCreate(&handle_)'

I am running ASR experiments using Kaldi on an SGE cluster consisting of two workstations with TITAN Xp GPUs.
Randomly, I run into the following problem:
ERROR (nnet3-train[5.2.62~4-a2342]:FinalizeActiveGpu():cu-device.cc:217) cudaError_t 1 : "__global__ function call is not configured" returned from 'cublasCreate(&handle_)'
I guess something is wrong with the GPU driver or the hardware.
Could you please offer some help?
Here is the complete log.
I had a similar issue running darknet on one of the TX2 boards.
With reference to
https://blog.csdn.net/JIEJINQUANIL/article/details/103091537
I entered root with
sudo su
then sourced the catkin_ws and launched darknet, and then it ran.
Here is my result.
Hope you can solve it with a similar method.

Google Cloud - Compute Engine - Error starting instance when attached disk in READ_ONLY

I have an instance (f1-micro) and a root persistent disk (10 GB) on Google Cloud's Compute Engine service. When I start an instance and attach the disk in READ_WRITE mode, the instance starts normally, executes my startup script, and I can access it through SSH. However, when I change the disk mode parameter to READ_ONLY, the instance apparently starts normally, but SSH gives me a connection timeout and my startup script does not run. I suspect that either I need to attach one more root persistent disk with READ_WRITE permission, or I need to set up some configuration on my disk. Could someone give me some insight into what is happening? Below I give some data and logs:
Request body:
instance = {
    'name': instance_name,
    'machineType': machine_type_url,
    'disks': [{
        'index': 0,
        'autoDelete': 'false',
        'boot': 'true',
        'type': 'PERSISTENT',
        'mode': 'READ_ONLY',  # READ_ONLY, READ_WRITE
        'deviceName': root_disk_name,
        'source': source_root_disk
    }],
    'networkInterfaces': [{
        'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
        }],
        'network': network_url
    }],
    'serviceAccounts': [{
        'email': service_email,
        'scopes': scopes
    }]
}
Log of the started instance on Google Cloud Compute Engine:
Changing serial settings was 0/0 now 3/0
Start bios (version 1.7.2-20131007_152402-google)
No Xen hypervisor found.
Unable to unlock ram - bridge not found
Ram Size=0x26600000 (0x0000000000000000 high)
Relocating low data from 0x000e10a0 to 0x000ef780 (size 2161)
Relocating init from 0x000e1911 to 0x265d07a0 (size 63291)
CPU Mhz=2601
=== PCI bus & bridge init ===
PCI: pci_bios_init_bus_rec bus = 0x0
=== PCI device probing ===
Found 4 PCI devices (max PCI bus is 00)
=== PCI new allocation pass #1 ===
PCI: check devices
=== PCI new allocation pass #2 ===
PCI: map device bdf=00:03.0 bar 0, addr 0000c000, size 00000040 [io]
PCI: map device bdf=00:04.0 bar 0, addr 0000c040, size 00000040 [io]
PCI: map device bdf=00:04.0 bar 1, addr febff000, size 00001000 [mem]
PCI: init bdf=00:01.0 id=8086:7110
PIIX3/PIIX4 init: elcr=00 0c
PCI: init bdf=00:01.3 id=8086:7113
Using pmtimer, ioport 0xb008, freq 3579 kHz
PCI: init bdf=00:03.0 id=1af4:1004
PCI: init bdf=00:04.0 id=1af4:1000
Found 1 cpu(s) max supported 1 cpu(s)
MP table addr=0x000fdaf0 MPC table addr=0x000fdb00 size=240
SMBIOS ptr=0x000fdad0 table=0x000fd9c0 size=269
Memory hotplug not enabled. [MHPE=0xffffffff]
ACPI DSDT=0x265fe1f0
ACPI tables: RSDP=0x000fd990 RSDT=0x265fe1c0
Scan for VGA option rom
WARNING - Timeout at i8042_flush:68!
All threads complete.
Found 0 lpt ports
Found 0 serial ports
found virtio-scsi at 0:3
Searching bootorder for: /pci@i0cf8/*@3/*@0/*@0,0
Searching bootorder for: /pci@i0cf8/*@3/*@0/*@1,0
virtio-scsi vendor='Google' product='PersistentDisk' rev='1' type=0 removable=0
virtio-scsi blksize=512 sectors=20971520
Searching bootorder for: /pci@i0cf8/*@3/*@0/*@2,0
...
Searching bootorder for: /pci@i0cf8/*@3/*@0/*@255,0
Scan for option roms
Searching bootorder for: HALT
drive 0x000fd950: PCHS=0/0/0 translation=lba LCHS=1024/255/63 s=20971520
Space available for UMB: 000c0000-000eb800
Returned 122880 bytes of ZoneHigh
e820 map has 6 items:
0: 0000000000000000 - 000000000009fc00 = 1 RAM
1: 000000000009fc00 - 00000000000a0000 = 2 RESERVED
2: 00000000000f0000 - 0000000000100000 = 2 RESERVED
3: 0000000000100000 - 00000000265fe000 = 1 RAM
4: 00000000265fe000 - 0000000026600000 = 2 RESERVED
5: 00000000fffbc000 - 0000000100000000 = 2 RESERVED
Unable to lock ram - bridge not found
Changing serial settings was 3/2 now 3/0
enter handle_19:
NULL
Booting from Hard Disk...
Booting from 0000:7c00
[ 1.386940] i8042: No controller found
Loading, please wait...
INIT: version 2.88 booting
[info] Using makefile-style concurrent boot in runlevel S.
[ ok ] Starting the hotplug events dispatcher: udevd.
[ ok ] Synthesizing the initial hotplug events...done.
[....] Waiting for /dev to be fully populated...[ 8.524814] piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[ ok ] done.
[ ok ] Activating swap...done.
[....] Checking root file system...fsck from util-linux 2.20.1
fsck.ext4: Operation not permitted while trying to open /dev/sda1
You must have r/w access to the filesystem or be root
fsck died with exit status 8
[FAIL] failed (code 8).
[FAIL] An automatic file system check (fsck) of the root filesystem failed. A manual fsck must be performed, then the system restarted. The fsck should be performed in maintenance mode with the root filesystem mounted in read-only mode. ...failed!
[warn] The root filesystem is currently mounted in read-only mode. A maintenance shell will now be started. After performing system maintenance, press CONTROL-D to terminate the maintenance shell and restart the system. ...(warning).
sulogin: root account is locked, starting shell
root@localhost:~#
Thanks!
You're correct in assuming that the boot disk for a GCE instance needs to appear in read-write mode. The documentation for root persistent disks says:
To start an instance with an existing root persistent disk in gcutil,
provide the boot parameter when you attach the disk. When you create a
root persistent disk using a Google-provided image, you must attach it
to your instance in read-write mode. If you try to attach it in
read-only mode, your instance may be created successfully, but it
won't boot up correctly.
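In terms of the request body in the question, that means the boot disk entry needs 'mode': 'READ_WRITE'. A minimal sketch of the same body with only that field changed (the variable names follow the question's snippet and are assumed to be defined by your code):

# Sketch: the instance body from the question, with the boot disk attached
# read-write, as the documentation for root persistent disks requires.
def build_instance_body(instance_name, machine_type_url, root_disk_name,
                        source_root_disk, network_url, service_email, scopes):
    return {
        'name': instance_name,
        'machineType': machine_type_url,
        'disks': [{
            'index': 0,
            'autoDelete': 'false',
            'boot': 'true',
            'type': 'PERSISTENT',
            'mode': 'READ_WRITE',  # root/boot disks must be read-write
            'deviceName': root_disk_name,
            'source': source_root_disk,
        }],
        'networkInterfaces': [{
            'accessConfigs': [{
                'type': 'ONE_TO_ONE_NAT',
                'name': 'External NAT',
            }],
            'network': network_url,
        }],
        'serviceAccounts': [{
            'email': service_email,
            'scopes': scopes,
        }],
    }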

Who wakes the kthreadd daemon during SD card read?

I would like to know what wakes the kthreadd daemon up when a read from the SD card is done using vfs_read.
According to the code flow, kthreadd will wake up mmcqd (mmc_queue_thread), which will process the read/write requests to the SD driver.
The issue I am facing is that although vfs_read to the SD card is called by the USB Mass Storage driver, the read does not proceed to mmc_queue_thread. This leads to old contents of the SD card being shown on the PC.
Here is the kernel stack trace after vfs_read, generated from sdhci_send_command():
------------[ cut here ]------------
WARNING: at /vobs/iandroid/src/kernel/drivers/mmc/host/mx_sdhci.c:495 sdhci_send_command+0x120/0x758()
Modules linked in: g_mot_android mxc91341_oh_udc sipcttydrv aplogger coredump bploader sipcdrv mu_drv
[] (dump_stack+0x0/0x14) from [] (warn_slowpath+0x68/0x84)
[<c009cd3c>] (warn_slowpath+0x0/0x84) from [<c0249000>] (sdhci_send_command+0x120/0x758)
r3:00000033 r2:00000000
r7:c6605f04 r6:c6605f5c r5:c65305c0 r4:c0409510
[<c0248ee0>] (sdhci_send_command+0x0/0x758) from [<c0249a98>] (sdhci_request+0x188/0x1bc)
[<c0249910>] (sdhci_request+0x0/0x1bc) from [<c0240834>] (mmc_wait_for_req+0x110/0x128)
r8:c656f870 r7:c6605dc8 r6:00000000 r5:c6530400 r4:c6605ef0
[<c0240724>] (mmc_wait_for_req+0x0/0x128) from [<c02479f8>] (mmc_blk_issue_rq+0x1f4/0x7b0)
r7:c642ae00 r6:00000000 r5:c64daeac r4:c64daea0
[<c0247804>] (mmc_blk_issue_rq+0x0/0x7b0) from [<c0248464>] (mmc_queue_thread+0x134/0x154)
[<c0248330>] (mmc_queue_thread+0x0/0x154) from [<c00b2e88>] (kthread+0x54/0x80)
r8:00000000 r7:00000000 r6:00000000 r5:c0248330 r4:fffffffc
[<c00b2e34>] (kthread+0x0/0x80) from [<c009f3a8>] (do_exit+0x0/0x738)
r5:00000000 r4:00000000
---[ end trace 24b57c573e7a44e3 ]---