rabbitmq 3.6.10: allow guest access from remote - rabbitmq

Recently I upgraded from Ubuntu 12.04 to Ubuntu 18.04. RabbitMQ 3.6.10 no longer allows guest access from remote hosts.
I have searched online and tried this method:
Create a config file /etc/rabbitmq/rabbitmq.config with contents
loopback_users = none
or
loopback_users.guest = false
Add the environment variable RABBITMQ_CONFIG_FILE pointing to /etc/rabbitmq/rabbitmq.conf.
Give the guest user administrator permission.
It still fails with this error:
HTTP access denied: user 'guest' - User can only log in via localhost
It seems neither rabbitmq.config nor rabbitmq.conf is being read; the environment variable RABBITMQ_CONFIG_FILE appears to be ignored as well, since changing it to a non-existent file makes no difference.
I can confirm rabbitmq-env.conf is used.
How can I allow remote access for the guest user?
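For what it's worth, RabbitMQ 3.6.x only understands the classic Erlang-term config format; the `loopback_users = none` key-value syntax belongs to the new-style rabbitmq.conf format introduced in 3.7.0. A sketch of the classic format that removes the loopback restriction (restart the server after saving):

```erlang
%% /etc/rabbitmq/rabbitmq.config -- classic Erlang-term format used by 3.6.x
[
  {rabbit, [
    %% An empty list means no users are restricted to loopback,
    %% so guest may connect from remote hosts.
    {loopback_users, []}
  ]}
].
```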

Related

Rundeck permission denied issue while executing a job in remote host machine

Earlier I installed Rundeck on my local machine and everything worked fine. Recently I installed Rundeck on a remote host machine, where the SSH user and the sudo user are different and are not in the same group.
When I try to run a job (Python scripts), it throws the permission denied message below. Do I need to change user-level details somewhere in a file? Please let me know.
/bin/sh: /tmp/4-10-host-machine-dispatch-script.tmp.sh: Permission denied
Result: 126
Failed: NonZeroResultCode: Result code was 126
Thanks,
RK
That means the /tmp directory is restricted on your remote node (some server setups restrict it for security reasons). You can define a custom copy script path in multiple ways:
1) Node level: define the file-copy-destination-dir attribute in the resources.xml file, for example:
<node name="freebsd11" description="FreeBSD 11 node" tags="unix,freebsd" hostname="192.168.33.41" osArch="amd64" osFamily="unix" osName="FreeBSD" osVersion="11.3-RELEASE" username="youruser" ssh-key-storage-path="keys/rundeck" file-copy-destination-dir="/home/youruser/scripts"/>
2) Project level: go to Edit Configuration (Rundeck sidebar), then Edit Configuration File (top-right button), and add this line:
project.file-copy-destination-dir=/home/youruser/scripts
3) Globally: stop the Rundeck service, add the following line to the framework.properties file (in the /etc/rundeck path), and start the Rundeck service again:
framework.file-copy-destination-dir=/home/youruser/scripts
Just make sure that the custom path is reachable by the remote ssh user. You can check the full documentation here.
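The Result: 126 in the question is the shell's "found but not executable" exit status. As a rough local illustration (the temporary file path comes from mktemp), stripping the execute bit from a script reproduces the same failure that a restricted /tmp causes:

```shell
# A script file without the execute bit cannot be exec'd: the shell
# reports "Permission denied" and returns exit status 126,
# just like a dispatch script in a restricted /tmp.
tmp=$(mktemp)
echo 'echo hello' > "$tmp"
chmod 600 "$tmp"      # readable, but not executable
"$tmp" 2>/dev/null
echo "exit code: $?"  # prints: exit code: 126
rm -f "$tmp"
```

A /tmp mounted with the noexec option fails the same way even when the execute bit is set, which is why moving the dispatch directory fixes it.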

copying file from local machine to Ubuntu 12.04 returning permission denied

How do I grant myself permission to transfer a .crt file from my local machine to the AWS Ubuntu 12.04 server?
I am using the following command from my machine and receiving a permission denied response.
scp -i /Users/me/key_pair.pem /Users/me/ssl-bundle.crt ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com:/etc/ssl/certs/
I am following Comodo's instructions; refer to the heading "Configure your nginx Virtual Host" in the link. I have not set anything up with regard to permissions as a user. This is a little new to me, and I would appreciate further sources of information.
I changed the permissions of the path on the server and transferred the file!
With reference to File Permissions, I gave the /etc/ssl/certs/ path "other write & execute" permission with this chmod command while ssh'd into the Ubuntu server:
sudo chmod o+wx /etc/ssl/certs/
Then, on my local machine, the following command copied a file on my directory and transferred it to destination:
scp -i /Users/me/key_pair.pem /Users/me/ssl-bundle.crt ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com:/etc/ssl/certs/
It is the write permission you need; depending on your use case, use the appropriate chmod command.
The simplest way to transfer files from local to EC2 (or EC2 to local) is FileZilla.
You can connect to your instance using FileZilla, then transfer files from local to server and vice versa.
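A common alternative that avoids opening up /etc/ssl/certs/ to other users is to copy the file to the remote user's home directory first and then move it into place with sudo (same key, host, and paths as in the question):

```shell
# Copy to the ubuntu user's home directory (writable by that user) ...
scp -i /Users/me/key_pair.pem /Users/me/ssl-bundle.crt \
    ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com:~/
# ... then move it into the protected directory with sudo.
ssh -i /Users/me/key_pair.pem ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com \
    "sudo mv ~/ssl-bundle.crt /etc/ssl/certs/"
```

This leaves the permissions on /etc/ssl/certs/ untouched.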

Access to log folder with Vagrant

I've got a Vagrantfile which mounts a box with Apache.
I would like to access the guest's log directory (/var/log/apache2) directly on my host using the synced folder mechanism (and not vagrant ssh!).
I've tried :
config.vm.synced_folder "./log/", "/var/log/apache2/"
The problem is that my local log directory is empty and overrides /var/log/apache2, making it empty (when I look at it via vagrant ssh). So the error.log file (stored in /var/log/apache2/error.log) is not synchronized to my host folder ./log (which remains empty) and, moreover, is erased during setup of the guest.
How can I configure Vagrant to synchronize from guest to host, and not the other way around (host to guest)?
Depending on your host OS, the following vagrant plugin could help you:
https://github.com/Learnosity/vagrant-nfs_guest
Basically, the plugin relies on NFS to export folders from the guest and mount them on the host.
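Assuming that plugin is installed (vagrant plugin install vagrant-nfs_guest), the synced folder from the question would be declared with the plugin's nfs_guest type; a minimal sketch (the box name is an example):

```ruby
# Vagrantfile sketch: export /var/log/apache2 from the guest over NFS
# and mount it at ./log on the host (guest-to-host direction).
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"   # example box
  config.vm.synced_folder "./log", "/var/log/apache2", type: "nfs_guest"
end
```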

Does the .ssh directory automatically come installed on a linux system by default?

If I tell someone to look in
~/.ssh
Can I assume that folder will always exist on a *nix filesystem? Specifically, is it always there on the standard distros of Linux and macOS? I'm following the GitHub generate-SSH-keys tutorial, and it appears to assume that ssh is included by default. Is that true?
Update: apparently macOS has an SSH server installed by default, but it is not enabled. According to the blog post by Chris Double:
The Apple Mac OS X operating system has SSH installed by default but the SSH daemon is not enabled. This means you can’t login remotely or do remote copies until you enable it.
To enable it, go to ‘System Preferences’. Under ‘Internet & Networking’ there is a ‘Sharing’ icon. Run that. In the list that appears, check the ‘Remote Login’ option.
This starts the SSH daemon immediately and you can remotely login using your username. The ‘Sharing’ window shows at the bottom the name and IP address to use. You can also find this out using ‘whoami’ and ‘ifconfig’ from the Terminal application.
On OS X, Ubuntu, CentOS, and presumably other Linux distros, the ~/.ssh directory does not exist by default in a user's home directory. On OS X and most Linux distros the SSH client, and typically an SSH server, are installed by default, so that can be a safe assumption.
The absence of the ~/.ssh directory does not mean that the SSH client or server is not installed; it just means that the particular user has not created the directory or used the SSH client before. The directory is created automatically when the user successfully sshes to a host (which adds the host to the client's ~/.ssh/known_hosts file) or generates a key via ssh-keygen. A user can also create the directory manually with the following commands.
mkdir ~/.ssh
chmod 700 ~/.ssh
To test whether an SSH client and/or server is installed and accessible on the path, you can use the which command. The output will indicate whether the command is installed and in the current user's path.
which ssh # ssh client
which sshd # ssh server
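If you do create the directory manually, it is worth verifying the mode afterwards, since OpenSSH is picky about permissions on ~/.ssh. With GNU coreutils (Linux) the check looks like this; on macOS the equivalent is stat -f '%Lp':

```shell
# Create ~/.ssh if missing and confirm it is accessible only by the owner.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
stat -c '%a' ~/.ssh   # prints: 700
```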
I would say no. I would guess that on 99% of systems there is an SSH server running, but IMHO in most cases you need to install that software yourself.
And even if it is installed, the directories are created on the first usage of ssh for that user.

'Unnamed VM' could not read or update the virtual machine configuration because access was denied: General access denied error (0x80070005).

Today I added a host to SCVMM, and later all VMs on the host failed to restart with the following error:
Error (12700) VMM cannot complete the host operation on the serverName
server because of the error: 'VMName' could not initialize. (Virtual
machine ID DDEA27BF-EBCA-49D6-B0BC-F89D83B1FCA4)
'VMName' could not create or access saved state file
D:\Hyper-V\VMName\Virtual
Machines\DDEA27BF-EBCA-49D6-B0BC-F89D83B1FCA4\DDEA27BF-EBCA-49D6-B0BC-F89D83B1FCA4.vsv. (Virtual machine ID DDEA27BF-EBCA-49D6-B0BC-F89D83B1FCA4) Unknown
error (0x8006)
Recommended Action Resolve the host issue and then try the operation
again.
Later we followed the article "Hyper-V virtual machine may not start, and you receive a "'General access denied error' (0x80070005)" error message" and successfully resolved the issue.
The solution is to grant each VM access to its own VM files and directories.
icacls <Path of .vhd or .avhd file> /grant "NT VIRTUAL MACHINE\<Virtual Machine ID from step 1>":(F)
To grant permission to virtual machine folder and its children:
icacls "D:\Hyper-V\Virtual Machine" /grant "NT VIRTUAL MACHINE\DDEA27BF-EBCA-49D6-B0BC-F89D83B1FCA4":(OI)(CI)F
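The virtual machine ID needed for the icacls grant can be read from the file names in the Virtual Machines folder (as in the error above) or, assuming the Hyper-V PowerShell module is available, queried directly (sketch; 'VMName' is a placeholder):

```powershell
# PowerShell (Hyper-V module): look up the VM's ID for use with icacls
(Get-VM -Name 'VMName').Id
```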
I encountered this error on start-up of my VM after I had changed out a drive on my server. The new drive had the same letter assigned as the previous drive, and all the files had been copied over, including my VM folder containing the .vhd file. I executed the icacls command given in Amitabha's answer, but this did not resolve the issue for me.
I then read somewhere that the group Authenticated Users should have full access to the folder containing the virtual machine configuration files. So I added the security group Authenticated Users in the Security tab of my VM folder's properties and gave this group Full Control rights. After doing this my VM started correctly.