Okay, so my setup:
Windows 8.1 host, CentOS 6.5 guest, VirtualBox 4.3.12
I have a folder in My Documents (Windows) that I use as a shared folder in my guest (CentOS), mounted at /var/www/htdocs/shared.
The purpose of this is to host my web project in the VM, but access and edit the files in Windows. And this works pretty well. The files in the shared folder can be accessed on my host and guest and can be edited as needed. I can access the web service in a browser from Windows just fine.
BUT, when I try to request the files in the shared folder from a browser, I get a 403 Forbidden error. The permissions on the guest show as rwxrwxrwx, so I don't know why I don't have permission to access them in a browser, and I can't change these permissions from within CentOS.
The ways I've tried mounting the share are:
mount -t vboxsf shared shared
mount -t vboxsf -o rw,exec shared shared
mount -t vboxsf -o rw,exec,uid=1000,gid=1000 shared shared
I got the same results for each.
So, that's my issue. How can I access files in a Virtualbox shared folder from my browser on the host?
To change the permissions on the directory, you can use the dmode and fmode parameters in the mount statement:
mount -t vboxsf -o rw,dmode=775,fmode=775 shared shared
You don't need to specify the uid and gid, but you need to add the apache user to the vboxsf group:
usermod -aG vboxsf apache
And finally, what actually made it work: you need to disable SELinux. Now I can view/edit my files in Windows and let the VM serve them in a browser. The goal of this was to be able to develop on Windows, but let my web app run in an environment identical to the production server. Hopefully this helps someone.
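For completeness, here is roughly what that looked like on my CentOS 6 guest (the share name "shared" and the mount point come from the question above; disabling SELinux outright is heavy-handed, so treat this as a sketch rather than a recommendation):
setenforce 0                                                     # disable SELinux immediately, until reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config     # persist across reboots
echo 'shared /var/www/htdocs/shared vboxsf rw,dmode=775,fmode=775 0 0' >> /etc/fstab   # optional: mount on boot (needs the Guest Additions vboxsf module available at boot)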
I have been asked to look at a WordPress site that is on Google Cloud. The WordPress admin works fine, but the front end of the site doesn't load the CSS.
I believe it to be a file permission issue.
Replicating the site and placing it on a different server with correct WordPress file permissions, it works fine.
However, on Google Cloud I have issues when trying to change the file permissions.
I have FTP access using FileZilla but can't change file permissions that way, and if I try to use the SSH console to change file permissions, that won't apply either.
Looking at the owner and group of the folder /var/www/html, it shows www-data rather than root. So my first question is: what should the correct owner and group be?
To change folder and file permissions and ownership, do the following.
SSH into the VM; Google Cloud provides a browser-based SSH terminal.
SSH will open a Linux terminal. If you are the root user, there is no need to type 'sudo' in the following commands.
Type 'sudo vim /etc/apache2/envvars'
Read what the config file says; the defaults are:
export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data
Exit the config file back to the Linux terminal command line.
Type the following commands to give Apache the appropriate user and group permissions on the public WordPress directory; change the user and group names as appropriate:
sudo chown -R www-data:www-data /var/www/html
sudo find /var/www/html -type d -exec chmod 750 {} \;
sudo find /var/www/html -type f -exec chmod 640 {} \;
You can now exit the SSH terminal. Note: if you want to see the new permissions in FileZilla, press F5 to refresh it.
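If it helps, here is a quick way to sanity-check the result afterwards (a sketch; it assumes Apache runs as www-data, as in the envvars defaults above):
ps -o user= -C apache2 | sort -u                    # confirm the user Apache actually runs as
stat -c '%U:%G %a %n' /var/www/html                 # expect www-data:www-data 750
sudo find /var/www/html ! -user www-data | head     # should print nothing if ownership is consistent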
I am trying to use WinSCP to transfer files over to a Linux Instance from Windows.
I'm using the private key for my instance to log in to the Amazon instance as ec2-user. However, ec2-user does not have write access on the Linux instance.
How do I sudo su - to access the root directory and write to the Linux box, using WinSCP or any other file transfer method?
Thanks
I know this is old, but it is actually very possible.
Go to your WinSCP profile (Session > Sites > Site Manager)
Click on Edit > Advanced... > Environment > SFTP
Insert sudo su -c /usr/lib/sftp-server in "SFTP server" (note this path might be different on your system)
Save and connect
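One caveat worth adding: this only works if sudo does not prompt for a password, because the SFTP session has no terminal to type it into. A minimal sketch of a sudoers rule for that, assuming the login user is ec2-user and using a hypothetical drop-in file name:
# Edit a drop-in safely so syntax errors are caught (the file name is just an example):
#   sudo visudo -f /etc/sudoers.d/90-winscp-sftp
# Broad rule, same idea as the FreePBX answer further down:
ec2-user ALL=(ALL) NOPASSWD: ALL
# If you set the SFTP server to "sudo /usr/lib/sftp-server" (without "su -c"),
# the rule can be restricted to that one binary instead:
# ec2-user ALL=(ALL) NOPASSWD: /usr/lib/sftp-server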
AWS Ubuntu 18.04:
There is an option in WinSCP that does exactly what you are looking for: the "SFTP server" setting under Advanced > Environment > SFTP, as described in the answer above.
AFAIK you can't do that.
What I did at my place of work is transfer the files to your home (~) folder (or really any folder you have full permissions in, e.g. chmod 777 or variants) via WinSCP, and then SSH to your Linux machine and use sudo from there to move them to your destination folder.
Another solution would be to change the permissions of the directories you are planning to upload the files to, so that your user (which lacks sudo privileges) can write to those directories.
I would also read about WinSCP Remote Commands for further detail.
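A rough sketch of that two-step approach, run from the Linux side after WinSCP has uploaded the file to your home directory (the key, host and file names are placeholders):
ssh -i my-key.pem ec2-user@my-instance.example.com   # placeholder key and host
sudo mv ~/myfile.tar.gz /var/www/html/               # move into the restricted directory
sudo chown root:root /var/www/html/myfile.tar.gz     # fix up ownership if required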
Usually all users will have write access to /tmp.
Place the file in /tmp, then log in with PuTTY; from there you can sudo and copy the file.
I just wanted to mention that for SUSE Enterprise Server V15.2 on an EC2 instance, the command to add to the WinSCP "SFTP server" setting is:
sudo su -c /usr/lib/ssh/sftp-server
I didn't have enough Reputation points to add a comment to the original answer but I had to fish this out so I wanted to add it.
SSH to FreePBX and run the commands below in your terminal:
sudo nano /etc/sudoers.d/my_config_file
(using sudo visudo -f /etc/sudoers.d/my_config_file is safer, since it checks the syntax) and add the following line, replacing YourUserName with your login user:
YourUserName ALL=(ALL) NOPASSWD:ALL
sudo systemctl restart sshd
WinSCP:
Under Session login ==> Advanced ==> SFTP
Change "SFTP server" to:
sudo /usr/libexec/openssh/sftp-server
I have the same issue, and I am not sure whether it is possible or not; the solutions above did not work for me.
As a workaround, I am moving the files to my HOME directory, editing them there, and replacing the originals over SSH.
Tagging this answer, which helped me; it might not answer the actual question.
If you are using a password instead of a private key, please refer to this answer for a tested, working solution on Ubuntu 16.04.5 and 20.04.1:
https://stackoverflow.com/a/65466397/2457076
I've got a Vagrantfile which brings up a box with Apache.
I would like to access the guest's log directory (/var/log/apache2) directly on my host using the synced folder mechanism (and not vagrant ssh!).
I've tried:
config.vm.synced_folder "./log/", "/var/log/apache2/"
The problem is that my host ./log directory is empty and overrides /var/log/apache2, making it empty as well (when I look at it via vagrant ssh). So the error.log file (stored at /var/log/apache2/error.log) is not synchronized to my host folder ./log (which remains empty) and, moreover, is erased during the setup of the guest.
How can I configure Vagrant to synchronize from guest to host rather than the other way around (host to guest)?
Depending on your host OS, the following vagrant plugin could help you:
https://github.com/Learnosity/vagrant-nfs_guest
Basically, the plugin relies on NFS to export folders from the guest and mount them on the host.
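Roughly how that looks in practice; the plugin name comes from the repository above, but the exact synced_folder type string is an assumption on my part, so double-check the project's README:
# On the host:
vagrant plugin install vagrant-nfs_guest
# Then, in the Vagrantfile, declare the folder with the plugin's type, e.g.:
#   config.vm.synced_folder "./log", "/var/log/apache2", type: "nfs_guest"
vagrant reload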
If I tell someone to look in
~/.ssh
Can I assume that that folder will always exist on a *nix filesystem? Specifically, is it always there on the standard distros of Linux and Mac OS X? I'm following the GitHub generate-SSH-keys tutorial, and it appears to assume that SSH is included by default. Is that true?
Update: apparently Mac OS X has an SSH server installed by default, but it is not enabled. According to the blog post by Chris Double:
The Apple Mac OS X operating system has SSH installed by default but the SSH daemon is not enabled. This means you can’t login remotely or do remote copies until you enable it.
To enable it, go to ‘System Preferences’. Under ‘Internet & Networking’ there is a ‘Sharing’ icon. Run that. In the list that appears, check the ‘Remote Login’ option.
This starts the SSH daemon immediately and you can remotely login using your username. The ‘Sharing’ window shows at the bottom the name and IP address to use. You can also find this out using ‘whoami’ and ‘ifconfig’ from the Terminal application.
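(As an aside, the same thing can apparently be done from the Terminal; a sketch, assuming an administrator account:)
sudo systemsetup -setremotelogin on    # enable Remote Login (the SSH daemon)
sudo systemsetup -getremotelogin       # should report "Remote Login: On"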
On OS X, Ubuntu, CentOS and presumably other Linux distros, the ~/.ssh directory does not exist by default in a user's home directory. On OS X and most Linux distros the SSH client, and typically an SSH server, are installed by default, so that can be a safe assumption.
The absence of the ~/.ssh directory does not mean that the ssh client is not installed or that an ssh server is not installed. It just means that particular user has not created the directory or used the ssh client before. A user can create the directory automatically by successfully sshing to a host which will add the host to the client's ~/.ssh/known_hosts file or by generating a key via ssh-keygen. A user can also create the directory manually via the following commands.
mkdir ~/.ssh
chmod 700 ~/.ssh
To test whether an ssh client and/or server is installed and accessible on the path you can use the which command. Output will indicate whether the command is installed and in the current user's path.
which ssh # ssh client
which sshd # ssh server
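And as mentioned above, generating a key is enough to create the directory; a minimal sketch following GitHub's tutorial (the email is a placeholder, and older OpenSSH versions may need -t rsa instead of ed25519):
ssh-keygen -t ed25519 -C "you@example.com"   # creates ~/.ssh (mode 700) if it does not exist
ls -ld ~/.ssh                                # verify the directory and its permissions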
I would say no. I guess on 99% of the systems there is an ssh server running but IMHO in most cases you need to install that software on your own.
And even if it is installed, the directories are created on the first usage of ssh for that user.
I have an NFS partition on the host. If I add it to a container with
docker run -i -t -v /srv/nfs4/dir:/mnt ubuntu
/mnt will contain the shared data, but doesn't that cause conflicts, since it hasn't been mounted with the NFS client?
Docker uses bind mounts to share host directories with containers. Docker handles namespace permission so that the container can access the mount. Otherwise from the host's perspective, the bind mounted NFS share is just being accessed by another process. It's safe to bind mount an NFS share elsewhere on the filesystem. Using it from within a Docker container is no different.
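To make that concrete, a small sketch; the NFS server name and export path are placeholders, and the point is that only the host talks NFS while the container just sees a bind mount:
# On the host: mount the export with the regular NFS client
sudo mount -t nfs4 nfs-server:/export/dir /srv/nfs4/dir
# Hand the already-mounted directory to a container as a bind mount
docker run -it -v /srv/nfs4/dir:/mnt ubuntu ls /mnt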
As of Docker 1.7+ you can use a Volume Plugin. See the Docker Volume Plugin section for details.
As far as NFS goes, you can use the Docker Netshare plugin, which handles mounting NFS, CIFS and AWS EFS file systems.
You have to share /srv/nfs4/ with your default Docker Machine VM. Go to VirtualBox > default (or boot2docker) > Settings > Shared Folders.
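The same can be done from the command line with VBoxManage; a sketch, assuming the machine is the usual Docker Machine VM named "default":
docker-machine stop default
VBoxManage sharedfolder add default --name srv-nfs4 --hostpath /srv/nfs4 --automount
docker-machine start default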