How to make a dump of physical memory - virtual-machine

I'd like to make a dump of physical memory using QEMU. Can I do it with virsh, for example virsh dump --memory-only? And can I make a live dump with this feature? Unfortunately, running virsh qemu-monitor-command --hmp "pmemsave 0 0x80000000 win7.dump" fails with the error: Can't open win7.dump, permission denied.
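For reference, a minimal sketch of both approaches, assuming a domain named win7 (the name and paths are examples). virsh dump --memory-only accepts --live for a dump while the guest keeps running, and the pmemsave permission error typically occurs because the monitor command is executed by the QEMU process itself (often running as the qemu user), so the file must be at an absolute path that user can write to:

# live dump of guest physical memory via libvirt
virsh dump win7 /var/lib/libvirt/qemu/win7.dump --memory-only --live

# pmemsave writes from inside the QEMU process; give it an absolute,
# qemu-writable path instead of a relative filename
virsh qemu-monitor-command win7 --hmp "pmemsave 0 0x80000000 /tmp/win7.dump"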

Related

SQL Server: installation fails with error code 0x851A001A – Wait on the Database Engine recovery handle failed

Details:
SQL Server 2017 (Developer or Express edition)
Windows 11 OS
I have already followed this article, but to no avail: https://blog.sqlauthority.com/2017/01/27/sql-server-sql-installation-fails-error-code-0x851a001a-wait-database-engine-recovery-handle-failed/
Feature: Database Engine Services
Status: Failed
Reason for failure: An error occurred during the setup process of the feature.
Next Step: Use the following information to resolve the error, uninstall this feature, and then run the setup process again.
Component name: SQL Server Database Engine Services Instance Features
Component error code: 0x851A001A
Error description: Wait on the Database Engine recovery handle failed. Check the SQL Server error log for potential causes.
Error help link: https://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=14.0.1000.169&EvtType=0xD15B4EB2%400x4BDAF9BA%401306%4026
WHAT IS THE CAUSE?
This is caused by the sector size of the disks.
During service startup, SQL Server begins the database recovery process to ensure database consistency. Part of this recovery process involves consistency checks on the underlying file system before attempting to open the system and user database files.
On systems running Windows 11, some new storage devices and device drivers expose a disk sector size greater than 4 KB.
When this occurs, SQL Server is unable to start because the file system is unsupported: SQL Server currently supports sector sizes of 512 bytes and 4 KB.
You can confirm that you encounter this specific issue by running the command:
fsutil fsinfo sectorinfo E:
Look for the value PhysicalBytesPerSectorForAtomicity, returned in bytes. A value of 4096 indicates a sector storage size of 4 KB.
HOW TO FIX IT!
Simply follow the instructions on this page:
https://learn.microsoft.com/en-us/troubleshoot/sql/admin/troubleshoot-os-4kb-disk-sector-size#resolutions
If you don't want to change the OS, try this resolution from the page above.
You can add a registry key that makes Windows 11 and later behave like Windows 10, forcing the reported sector size to be emulated as 4 KB. To add the ForcedPhysicalSectorSizeInBytes registry key, use the Registry Editor, or run one of the following commands in a Windows command prompt or PowerShell, executed as an administrator.
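The command documented there at the time of writing looks like the following; treat it as a sketch and verify the exact key and value against the linked article, since the stornvme service key applies to NVMe devices specifically:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\stornvme\Parameters\Device" /v "ForcedPhysicalSectorSizeInBytes" /t REG_MULTI_SZ /d "* 4095" /f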
After you change the registry, you must restart the device and then re-run the setup; otherwise, this method will not work.

SQL Server: mapping a network drive - Insufficient system resources exist to complete the requested service

Hello, I am trying to create a new maintenance plan in SQL Server to back up all my databases.
My goal is to back them up to a network drive so that, if I run into trouble with my server, I can restore the databases to another server from the backups on the network drive.
When my plan executes, I get an error, so I tried running the corresponding query manually.
After some investigation, it seems even the net use command doesn't work from SQL Server (whereas it works when I run it from cmd):
EXEC xp_cmdshell 'net use Z: \\ServerName\loggin /user:loggin password'
The error is:
System error 1450 has occurred. Insufficient system resources exist to complete the requested service.
Besides, I have another server where this works, so I suppose some configuration is missing, but I can't find it.
As my network drive is also accessible via FTP, I chose that route for the job: create a batch file that runs WinSCP, and use this batch file in a SQL Server Agent job. I needed to grant the SQL Server Agent account rights to the batch file, and also to define a credential and a proxy to be used by the job, as sketched below.
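A minimal sketch of such a batch file, assuming WinSCP's command-line client winscp.com and using example host, credentials, and paths:

rem backup-to-ftp.bat - upload backup files via WinSCP (all names are examples)
"C:\Program Files (x86)\WinSCP\winscp.com" /log=C:\Backup\winscp.log /command ^
  "open ftp://backupuser:secret@ftp.example.com/" ^
  "put C:\Backup\*.bak /backups/" ^
  "exit"

The batch file is then called from an Operating system (CmdExec) job step in the SQL Server Agent job, running under the proxy mentioned above.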

Nagios nrpe command fails but local command works

I'm using a custom script to check physical memory.
https://exchange.nagios.org/components/com_mtree/attachment.php?link_id=3329&cf_id=24
(I added the performance data.)
Run locally like this:
/usr/lib64/nagios/plugins/check_custom_memory.sh
output:
OK - 30405 MB (96%) Free Memory | total=31513MB used=1108MB
When I run it from the Nagios server with this command (actual IP hidden for security reasons):
/usr/lib64/nagios/plugins/check_nrpe -t 60 -H xxx.xxx.xxx.xxx -c check_custom_memory.sh -a 10 5
output:
CRITICAL - 30405 MB (%) Free Memory | total=31513MB used=1108MB
It seems that check_nrpe is excluding the % value. This happens only on this server, not on my other servers. All other checks run fine, and any other NRPE check against this remote server works fine too. It seems to be just this one check. That makes me think it's the script, but it works on other servers and locally, so I'm at a loss.
The /tmp/memcalc file has 666 permissions and is owned by nrpe on the remote server, and I can see it being written as it should when the script runs locally. When running via check_nrpe, the file is not being accessed or written.
Any ideas why?
I believe I found the issue. It seems to have something to do with SELinux. Normally we don't use it, but this server has it running, and it blocks writes to the file the script creates in the /tmp directory to calculate the percentage of free memory. A quick way to confirm is sketched below.
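A sketch of how to confirm this on the remote host (tool availability varies by distribution):

getenforce                               # "Enforcing" means SELinux is active
ausearch -m avc -ts recent | grep nrpe   # look for AVC denials against nrpe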
As a result, I just rewrote the script to skip the temp file and calculate the percentage using simple math, at a small cost in accuracy (which is fine), along the lines of the sketch below.
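A minimal sketch of that approach, reading /proc/meminfo directly so nothing is written to disk; the threshold handling and output format below are assumptions modeled on the plugin output shown above, not the original script:

#!/bin/bash
# check_custom_memory.sh (sketch) - free-memory check without a temp file
warn=${1:-10}   # WARNING threshold, percent free
crit=${2:-5}    # CRITICAL threshold, percent free

# MemAvailable requires kernel >= 3.14
total=$(awk '/^MemTotal:/ {print int($2/1024)}' /proc/meminfo)      # MB
avail=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)  # MB
used=$(( total - avail ))
pct=$(( avail * 100 / total ))  # integer math: slightly inaccurate, good enough

perf="total=${total}MB used=${used}MB"
if   (( pct <= crit )); then echo "CRITICAL - ${avail} MB (${pct}%) Free Memory | ${perf}"; exit 2
elif (( pct <= warn )); then echo "WARNING - ${avail} MB (${pct}%) Free Memory | ${perf}"; exit 1
else                         echo "OK - ${avail} MB (${pct}%) Free Memory | ${perf}"; exit 0
fi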

Invalid cross-device link Redis

I am getting an invalid cross-device link error when Redis rolls its log (rewrites its AOF) on a VM. Both directories are under /tmp/ in the VM and appear to be on the same disk.
Is there a way to debug this on a virtual machine?
https://github.com/antirez/redis/issues/305
Error trying to rename the temporary AOF: Invalid cross-device link
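The error is EXDEV from rename(2), which fails whenever the source and destination sit on different filesystems, even when both paths are under /tmp (for example, one on tmpfs and the other on a disk-backed mount). Comparing the device IDs of the two directories usually settles it; the Redis data directory below is an example path:

# different device IDs => rename() fails with "Invalid cross-device link"
stat -c '%d %n' /tmp /var/lib/redis

# or compare the mounted filesystems directly
df /tmp /var/lib/redis

If they differ, point Redis's dir setting (where the temporary AOF is created) at the same filesystem as the final AOF location.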

sftp fails with 'message too long' error

My Java program uses ssh/sftp for transferring files to Linux machines (obviously...), and my library for doing so is JSch (though it's not to blame).
Now, some of these Linux machines have shell login startup scripts, which tragically cause the ssh/sftp connection to fail with the following message:
Received message too long 1349281116
After briefly reading about it, it's clearly a known ssh design issue (not a bug - see here). All suggested solutions are on the ssh server side (i.e., disabling scripts that output messages during shell login).
My question: is there an option to avoid this issue on the client side?
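Incidentally, the oversized "length" is not random: the SFTP client reads the first four bytes of whatever the login scripts print and interprets them as an SFTP packet length, so the number usually decodes to printable ASCII. A quick check, assuming xxd is available:

printf '%08x\n' 1349281116            # -> 506c655c
printf '506c655c' | xxd -r -p; echo   # -> Ple\ (the start of the script's output)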
Check your .bashrc and .bash_profile on the server, and remove anything that can echo. For now, comment the lines out.
Try again; you should not see this message again.
Put the following at the top of ~/.bashrc for the relevant user on the remote machine:
# If not running interactively, don't do anything; just return early from .bashrc
[[ $- == *i* ]] || return
This just returns early from .bashrc instead of sourcing the entire file, which you do not want when performing an scp or sftp to that remote machine. Depending on the login shell of the remote user, make this edit in ~/.bashrc, ~/.bash_profile, ~/.profile, or similar.
I got that error too, during an sftp get call in a bash script.
According to the OP's error message, which was similar to mine, it looks as if the -B option of the sftp command was set, although a buffer size of 1349281116 bytes is "a bit" too high.
In my case I had also set the buffer size explicitly (with "good intentions"), which caused the same error message, followed by my chosen value.
Removing the forced value and letting sftp run with the default of 32K solved the problem for me.
-B buffer_size
Specify the size of the buffer that sftp uses when transferring
files. Larger buffers require fewer round trips at the cost of
higher memory consumption. The default is 32768 bytes.
If it turns out to be the same issue, this would work as a client-side solution.
NOTE: I had to fix the .bashrc output on the remote hosts, not on the host issuing the scp or sftp command.
Here's a quick-'n'-dirty solution, but it seems to work, even on binary files. All credit goes to uvgroovy.
Given a file 'some-file.txt', just do:
cat some-file.txt | ssh root@1.1.1.1 '/bin/bash -c "cat > /root/some-new-file.txt"'
Still, if anyone knows an sftp/scp built-in way to do this on the client side, that would be great.