'Unnamed VM' could not read or update the virtual machine configuration because access was denied: General access denied error (0x80070005). - hyper-v

Today I added a host to SCVMM. Later, all VMs on the host failed to restart, and the following error was thrown:
Error (12700) VMM cannot complete the host operation on the serverName
server because of the error: 'VMName' could not initialize. (Virtual
machine ID DDEA27BF-EBCA-49D6-B0BC-F89D83B1FCA4)
'VMName' could not create or access saved state file
D:\Hyper-V\VMName\Virtual
Machines\DDEA27BF-EBCA-49D6-B0BC-F89D83B1FCA4\DDEA27BF-EBCA-49D6-B0BC-F89D83B1FCA4.vsv. (Virtual machine ID DDEA27BF-EBCA-49D6-B0BC-F89D83B1FCA4) Unknown
error (0x8006)
Recommended Action Resolve the host issue and then try the operation
again.

Later we followed the article "Hyper-V virtual machine may not start, and you receive a "'General access denied error' (0x80070005)" error message" and successfully resolved the issue.
The solution is to grant each VM access to its own files and directories:
icacls <Path of .vhd or .avhd file> /grant "NT VIRTUAL MACHINE\<Virtual Machine ID from step 1>":(F)
To grant permission to the virtual machine folder and its children:
icacls "D:\Hyper-V\Virtual Machine" /grant "NT VIRTUAL MACHINE\DDEA27BF-EBCA-49D6-B0BC-F89D83B1FCA4":(OI)(CI)F
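If you don't have the virtual machine ID handy, one way to look it up (assuming the Hyper-V PowerShell module is available on the host, and using 'VMName' as a placeholder) is:

```powershell
# Look up the VM ID that the icacls command above needs.
# 'VMName' is a placeholder; substitute your VM's actual name.
Get-VM -Name 'VMName' | Select-Object Name, VMId
```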

I encountered this error on start-up of my VM after I had changed out a drive on my server. The new drive had the same letter assigned as the previous drive and all the files had been copied over, including my VM folder that contained the .vhd file. I executed the icacls command as given in Amitabha's answer, but this did not resolve the issue for me.
I then read somewhere that the group Authenticated Users should have full access to the folder containing the virtual machine configuration files. So I added the security group Authenticated Users on the Security tab of my VM folder's properties and gave this group Full Control rights. After doing this, my VM started correctly.
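The same grant can be done from the command line instead of the Security tab; a sketch mirroring the icacls style above (the folder path is an example, substitute your own):

```bat
icacls "D:\Hyper-V\VMName" /grant "NT AUTHORITY\Authenticated Users":(OI)(CI)F
```

The (OI)(CI) flags make the grant inherit to files and subfolders, which is what the GUI's "full control on this folder, subfolders and files" option does.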

Related

RabbitMQ 3.6.10 allows guest access from remote

Recently I upgraded my ubuntu 12.04 into ubuntu 18.04. RabbitMQ 3.6.10 does not allow guest access from remote anymore.
I have searched online and tried this method:
Create a config file /etc/rabbitmq/rabbitmq.config with contents
loopback_users = none
or
loopback_users.guest = false
Add environment variable RABBITMQ_CONFIG_FILE as /etc/rabbitmq/rabbitmq.conf.
Give the guest user administrator permissions.
It still fails with this error:
HTTP access denied: user 'guest' - User can only log in via localhost
It seems neither rabbitmq.config nor rabbitmq.conf is being read; the RABBITMQ_CONFIG_FILE environment variable also appears to be ignored, since pointing it at a non-existent file changes nothing.
I can confirm rabbitmq-env.conf is used.
How should I allow remote access for guest?
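One thing worth checking: the key = value style used above (loopback_users = none) is the sysctl-format rabbitmq.conf, which was only introduced in RabbitMQ 3.7.0. RabbitMQ 3.6.x only reads the classic Erlang-term file rabbitmq.config, so on 3.6.10 the file would need to look something like this (note the trailing dot):

```erlang
%% /etc/rabbitmq/rabbitmq.config -- classic Erlang-term format,
%% the only format RabbitMQ 3.6.x understands.
%% An empty loopback_users list means no user (including guest)
%% is restricted to localhost-only logins.
[
  {rabbit, [{loopback_users, []}]}
].
```

After writing the file, restart the broker (e.g. sudo systemctl restart rabbitmq-server) for the change to take effect.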

Rundeck permission denied issue while executing a job in remote host machine

Earlier I installed Rundeck on my local machine and everything worked fine. Recently I installed Rundeck on a remote host machine where the ssh and sudo users are different, and they are not in the same group.
When I try to run the job (Python scripts), it throws the permission denied message below. Do I need to change the user-level details somewhere in a file? Please let me know.
/bin/sh: /tmp/4-10-host-machine-dispatch-script.tmp.sh: Permission denied
Result: 126
Failed: NonZeroResultCode: Result code was 126
Thanks,
RK
That means the /tmp directory is restricted on your remote node (some server setups restrict it for security reasons). You can define a custom copy script path in multiple ways:
1) Node level: define the file-copy-destination-dir attribute in the resources.xml file, for example:
<node name="freebsd11" description="FreeBSD 11 node" tags="unix,freebsd" hostname="192.168.33.41" osArch="amd64" osFamily="unix" osName="FreeBSD" osVersion="11.3-RELEASE" username="youruser" ssh-key-storage-path="keys/rundeck" file-copy-destination-dir="/home/youruser/scripts"/>
2) Project level: go to Edit Configuration (Rundeck sidebar) > Edit Configuration > Edit Configuration File (top right button) and add this line:
project.file-copy-destination-dir=/home/youruser/scripts
3) Globally: stop the Rundeck service, add the following line to the framework.properties file (at the /etc/rundeck path), and start the Rundeck service again:
framework.file-copy-destination-dir=/home/youruser/scripts
Just make sure the custom path is writable and reachable by the remote SSH user. You can check the full documentation here.
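For context, exit code 126 means "found but not executable", which is exactly what happens when a dispatch script lands on a restricted filesystem. A small local demonstration (using a throwaway file name; a noexec-mounted /tmp produces the same failure as a missing execute bit):

```shell
# Reproduce the symptom: the kernel refuses to execute the script,
# and the shell reports exit code 126 ("Permission denied").
cat > /tmp/demo-dispatch.sh <<'EOF'
#!/bin/sh
echo hello
EOF
chmod 644 /tmp/demo-dispatch.sh        # execute bit deliberately missing
/tmp/demo-dispatch.sh || echo "exit code: $?"   # prints: exit code: 126
```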

Warning: Identity file /home/user/.ssh/id_rsa not accessible: No such file or directory

I'm using Deployer for deploying my code to multiple servers. Today I got this error after starting a deployment:
[Deployer\Exception\RuntimeException (-1)]
The command "if hash command 2>/dev/null; then echo 'true'; fi" failed.
Exit Code: -1 (Unknown error)
Host Name: staging
================
Warning: Identity file /home/user/.ssh/id_rsa not accessible: No such file or directory.
Permission denied (publickey).
First I thought it probably had something to do with this server's configuration, since I had moved the complete installation to another hosting provider. I tried to trigger a deployment to a server that I had deployed to just fine in the past few days, but got the same error. This quickly turned my suspicion from the server to my local setup.
Since I'm running PHP in Docker (Deployer is written in PHP), I thought it might have had something to do with my ssh-agent not being forwarded correctly from my host OS to Docker. I checked this by using a fresh PHP installation directly on my OS (Ubuntu, if that helps). The same warning kept popping up in the logs.
When logging in using the ssh command, everything seems to be fine. I still have no clue what's going on here. Any ideas?
PS: I also created an issue at Deployer's GIT repo: https://github.com/deployphp/deployer/issues/1507
I have no experience with the library you are talking about, but the issue starts here:
Warning: Identity file /home/user/.ssh/id_rsa not accessible: No such file or directory.
So let's focus on that. Potential things I can think of:
Is the username really user? It says that the file lives at /home/user. Verify that this really is the correct path; for instance, just ls the file:
$ ls /home/user/.ssh/id_rsa
That will throw a No such file or directory error if it doesn't exist.
If 1. is not the issue, then most likely the permissions on the key are wrong for the user inside the Docker container. If so, then INSIDE the Docker container, change the permissions on id_rsa before using it:
$ chmod 600 /home/user/.ssh/id_rsa
Now do stuff with the key...
SSH generally won't use a private key unless it is read-write accessible only by the user trying to run the ssh client or agent. In this case, that is the user inside the Docker container.
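A minimal demonstration of the permission rule, using a throwaway path (not your real key), showing the mode before and after the fix:

```shell
# Simulate the fix on a dummy key file: 644 is too permissive
# (ssh refuses keys readable by group/others); 600 is accepted.
mkdir -p /tmp/ssh-demo
touch /tmp/ssh-demo/id_rsa
chmod 644 /tmp/ssh-demo/id_rsa     # too permissive: ssh would reject it
chmod 600 /tmp/ssh-demo/id_rsa     # owner read/write only: ssh accepts it
stat -c '%a' /tmp/ssh-demo/id_rsa  # prints: 600
```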

copying file from local machine to Ubuntu 12.04 returning permission denied

How do I grant myself permission to transfer a .crt file from my local machine to the AWS Ubuntu 12.04 server?
I am using the following command from my machine and receiving a permission denied response.
scp -i /Users/me/key_pair.pem /Users/me/ssl-bundle.crt ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com:/etc/ssl/certs/
I am following Comodo's instructions; refer to the heading Configure your nginx Virtual Host in the link. I have not set anything up with regard to permissions as a user. This is a little new to me, and I would appreciate further sources of information.
I changed the permission of the path on the server and transferred the file!
With reference to File Permissions, I gave the /etc/ssl/certs/ path "other write & execute" permission with this chmod command while ssh'd into the Ubuntu server:
sudo chmod o+wx /etc/ssl/certs/
Then, on my local machine, the following command copied a file on my directory and transferred it to destination:
scp -i /Users/me/key_pair.pem /Users/me/ssl-bundle.crt ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com:/etc/ssl/certs/
It is the write permission you need; depending on your use case, use the appropriate chmod command.
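A safer alternative, which avoids making /etc/ssl/certs/ world-writable: copy the file to the remote user's home directory first, then move it into place with sudo. A sketch using the same host and key as above:

```shell
# Step 1: copy to the home directory (always writable by the ssh user)
scp -i /Users/me/key_pair.pem /Users/me/ssl-bundle.crt ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com:~/

# Step 2: move it into the privileged location with sudo
ssh -i /Users/me/key_pair.pem ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com 'sudo mv ~/ssl-bundle.crt /etc/ssl/certs/'
```

If you have already opened up the directory with chmod o+wx, consider reverting it afterwards (e.g. sudo chmod o-wx /etc/ssl/certs/).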
The simplest way to transfer files between your local machine and EC2 (in either direction) is FileZilla.
You can connect to your instance with FileZilla, then transfer files from local to server and vice versa.

Access to log folder with Vagrant

I've got a vagrant file which mounts a box with apache.
I would like to access the guest's log directory (/var/log/apache2) directly on my host using the synced folder mechanism (and not vagrant ssh!)
I've tried :
config.vm.synced_folder "./log/", "/var/log/apache2/"
The problem is that my local log directory is empty and, when mounted, it overrides /var/log/apache2, making it empty (when I look at it via vagrant ssh). So the error.log file (stored at /var/log/apache2/error.log) is not synchronized to my host folder ./log (which remains empty) and, moreover, is erased during the setup of the guest.
How can I configure Vagrant to synchronize from guest to host, and not the other way around (host to guest)?
Depending on your host OS, the following vagrant plugin could help you:
https://github.com/Learnosity/vagrant-nfs_guest
Basically, the plugin relies on NFS to export folders from the guest and mount them on the host.
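If the plugin suits your host OS, the synced folder declaration would look something like the sketch below, using the paths from the question. The "nfs_guest" type name is taken from the plugin; verify the exact option against its README before relying on it:

```ruby
# Vagrantfile sketch: export /var/log/apache2 from the guest and
# mount it at ./log on the host via the vagrant-nfs_guest plugin.
config.vm.synced_folder "./log", "/var/log/apache2", type: "nfs_guest"
```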