I searched the Internet for hours, but I could not find any solution to this specific problem.
1: I have Xubuntu Linux on my PC. I use it in an average way: browsing the Internet, watching videos, etc. It also hosts my PHPStorm app, but not the project files. This is the HOST. It has a host-only network: 192.168.56.1
2: I have a VirtualBox Debian Linux (no GUI) system. This is meant to represent a development version of my real webserver. It has all the project files. This VM is on an external drive, so I can take it everywhere (e.g. to the office). 192.168.56.101. This is the GUEST.
3: on the HOST I use dnsmasq to redirect every *.dev domain to the GUEST, so I can test my projects easily (a sample rule is shown after this list).
4: on the GUEST I exported the /var/www folder in /etc/exports:
/var/www 192.168.56.1(rw,sync,no_root_squash,no_subtree_check)
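For reference, the dnsmasq rule from point 3 is roughly this (the exact config file may differ, e.g. it could also be a drop-in under /etc/dnsmasq.d/):
# /etc/dnsmasq.conf on the HOST: resolve every *.dev name to the GUEST
address=/dev/192.168.56.101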
The problem: I want to use PHPStorm on the HOST to edit the files on the GUEST "locally". But I cannot mount the GUEST's /var/www folder into the HOST's /home/gabor/Projects folder with full permissions. I tried the following:
$> sudo mount 192.168.56.101:/var/www /home/gabor/Projects
This looks okay at first, but the folder is mounted as nobody:nogroup and I have no permission to edit.
I want /home/gabor/Projects to be owned by gabor:gabor, and everything I create in this folder must be owned by www-data:www-data on the Debian side. But I cannot specify the user for an NFS mount.
$> sudo mount -o umask=0022,gid=1000,uid=1000 192.168.56.101:/var/www /home/gabor/Projects
mount.nfs: an incorrect mount option was specified
I also failed to mount --bind /var/www with a different user (it should be nobody:nogroup) on the Debian side, so that I could export that one...
How can I solve this problem?
Please help me.
Thank you.
NFS v2 and v3 do not support uid/gid.
See man nfs on Ubuntu.
Adding this answer for posterity, as I ended up here with the same question.
Try this in /etc/exports:
/var/www 192.168.56.1(rw,root_squash)
Then on the client, put this in /etc/fstab:
192.168.56.101:/var/www /home/gabor/Projects nfs defaults,user,noauto,relatime,rw 0 0
The user option will allow a non-root user to mount the volume. Adjust other options as needed.
Then on the client again, become the user you want to mount the volume as, and then mount the volume you added to /etc/fstab:
$ id
uid=1000(gabor) gid=1000(gabor) groups=1000(gabor)
$ mount /home/gabor/Projects
$
Make sure that the uid and/or gid are the same on the server. I'm not sure if the usernames can be different or not. Also make sure that the directory being exported on the server is writable by the user or group. See this blog post for additional info about setting up NFS in a similar manner.
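For example, something along these lines on the server should do it (the user name is illustrative; adjust to your setup):
# on the server: confirm the uid/gid match the client user
id gabor
# make the exported directory writable by that user (keeping the Apache group)
chown -R gabor:www-data /var/www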
Caution: This is an insecure configuration without authentication. Use NFS v4 with Kerberos for strong authentication.
OK, I found a solution that does exactly what I want.
First, install sshfs:
$> sudo apt-get install sshfs
Then mount the remote /var/www:
$> sshfs -o uid=33,gid=33 root@192.168.56.101:/var/www /home/gabor/Projects
And that is it!
$> ls -la /home/gabor | grep Projects
drwxr-xr-x 1 www-data www-data 4096 Okt 14 21:10 Projects
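If you want the sshfs mount to persist across reboots, an /etc/fstab entry along these lines should also work (an untested sketch; allow_other additionally requires user_allow_other to be enabled in /etc/fuse.conf):
root@192.168.56.101:/var/www /home/gabor/Projects fuse.sshfs uid=33,gid=33,allow_other,_netdev 0 0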
I installed NFS using this command on Fedora 32:
sudo dnf install nfs-utils
and then I created a dir to export:
[dolphin@MiWiFi-R4CM-srv infrastructure]$ cat /etc/exports
/home/dolphin/data/k8s/monitoring/infrastructure/jenkins *(rw,no_root_squash)
Now I can mount this dir as the root user like this:
sudo mount -t nfs -o v3 192.168.31.2:/home/dolphin/data/k8s/monitoring/infrastructure/jenkins /mnt
Now I want to go a step further and make it available to any user from any IP (so the client can mount the NFS share without using sudo), so I first tried to chown this folder:
chown 777 jenkins
and then I want to change this jenkins folder's user and group to nfsnobody:
[dolphin@MiWiFi-R4CM-srv infrastructure]$ chown -R nfsnobody jenkins
chown: invalid user: ‘nfsnobody’
and I cannot find any nfsnobody entry in /etc/passwd. What should I do to fix the invalid user: ‘nfsnobody’ problem? Should nfs-utils have added it automatically?
Right now nobody is used by default, probably since RedHat/CentOS version 8.
You can simply use
chown -R nobody jenkins
Or
Change it in /etc/idmapd.conf:
[Mapping]
Nobody-User = nfsnobody
Nobody-Group = nfsnobody
To put the changes into effect, restart the rpcidmapd service and remount the NFSv4 filesystem:
service rpcidmapd restart
mount -o remount /nfs/mnt/point
On Red Hat Enterprise Linux 6, if the above settings have been applied and UID/GIDs are matched on the server and client, and users are still being mapped to nobody:nobody, then clearing the idmapd cache may be required.
# nfsidmap -c
I am trying to set up a script that will:
Connect to a Windows share
Using LOAD DATA LOCAL INFILE, upload the two files into their appropriate db tables
Unmount the share
Situation:
I can currently vpnc into this remote machine
Problem:
I cannot
mount -t cifs //ip.address/share /mnt/point -o username=u,password=p,port=445
mount error(110) Connection timed out
I am attempting to do this manually first
Remote server is open to port 445
Questions:
Do I even need to vpnc in first?
Do I need to do route add for the remote ip/mask/gw after vpnc?
Thank you!
The mount.cifs utility is provided by the samba-client package. This can be installed from the standard CentOS yum repository by running the following command:
yum install samba samba-client cifs-utils
Once installed, you can mount a Windows SMB share on your CentOS server by running the following command:
Syntax:
mount.cifs //SERVER_ADDRESS/SHARE_NAME MOUNT_POINT -o user=USERNAME
SERVER_ADDRESS: Windows system’s IP address or hostname
SHARE_NAME: The name of the shared folder configured on the Windows system
USERNAME: Windows user that has access to this share
MOUNT_POINT: The local mount point on your CentOS server
I am mounting to a share from \\10.11.10.26\snaps
Make a directory under /mnt for your mount point:
mkdir /mnt/mymount
Now I am mounting the snaps folder from indiafps02. The user name is the domain credential, i.e. from Mydomain in this case:
mount.cifs //10.11.10.26/snaps /mnt/mymount -o user=Girish.KG
Now you can see the content by typing:
ls /mnt/mymount
So, after performing your task, just fire the umount command:
umount /mnt/mymount
That's it. You are done.
There is no need to install "samba" and "samba-client"; only "cifs-utils" is needed, installed using the command
yum install cifs-utils
After that, on Windows, share the folder you would like to mount in CentOS, if you haven't done that already ("c:\inetpub\wwwroot" in my case).
Make sure you share it with a specific username whose password you know ("netops" in my case).
Create a directory in CentOS into which you would like to mount the Windows share ("/mnt/cm" in my case).
After that, run this simple command as root:
mount.cifs //10.16.0.160/wwwroot /mnt/cm/ -o user=netops
CentOS will prompt you for the Windows user's password.
You are done.
I am getting the following error trying to mount an NFS export.
sudo mount 192.168.1.175:/mnt/nas /mnt/c/nas
mount.nfs: No such device
Any ideas on how to fix this?
As of October 2020: you can mount NFS with WSL 2, but WSL 2 itself requires hardware virtualization to be available. See here: https://github.com/microsoft/WSL/issues/5838
If, like me, you are stuck on WSL 1, you can work around this issue by mapping the drive in Windows. Use the Map Network Drive feature and create a drive letter for your NFS mount, e.g. G:
Now in WSL you can mount that drive letter:
sudo mkdir /mnt/g
sudo mount -t drvfs G: /mnt/g
from: How to Mount Windows Network Drives in WSL
I have not tested the access speed to a drive mapped through to WSL like this but I would expect it to be slow!
The error indicates that the NFS kernel modules are not loaded correctly. Also verify whether the exported path "/mnt/nas" exists on the server "192.168.1.175".
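A few quick checks along those lines, assuming a regular Linux kernel rather than WSL 1 (the path and IP come from the question):
# is NFS client support available and loaded?
lsmod | grep nfs
sudo modprobe nfs
# does the server actually export /mnt/nas?
showmount -e 192.168.1.175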
First of all, NFS is a TCP/IP protocol, so one client and one server are needed. Our purpose here is to share a dir on Windows or WSL with another Linux machine, which means the Windows or WSL side is the server. You are all right about NFS inside WSL: it doesn't work if we run the NFS server inside WSL, but we can run an NFS server on Windows instead and configure the shared dirs to point at directories we can also see from WSL, e.g. /mnt/d/WORK/tftpserverDir. After that we can mount successfully. These are my tips:
Make an NFS server on Windows.
I downloaded it from here:
https://www.hanewin.net/nfs-e.htm
Configure the shared dir in the exports file:
D:\WORK\tftpserverDir -name:nfsroot -umask:000 -public -mapall:0
Mount the shared dirs on your destination Linux:
mount -t nfs -o nolock -o tcp -o rsize=32768,wsize=32768 172.10.10.80:/nfsroot /sdcard/mnt
I recently set up a LAMP stack on Ubuntu 14.04 for my web server. I'm working through DigitalOcean. These are the steps I went through...
On my local machine I logged in to my web server with
sftp user@web_server_ip
Then
sftp> cd /var/www/html
How would I go about getting to my local machine to get the files for the site? And how would I transfer them?
I know that I have to use the [get] and [put] commands
I'm just confused about what's considered local/remote if I'm logged into the remote server from my local machine. Am I overthinking it?
This is the tutorial I'm trying to follow: How To Use SFTP to Securely Transfer Files with a Remote Server
Edit:
So I tried moving a whole directory from my local machine and this is what I ended up doing
scp -r /path/directory_name name@ip_address:/var/www/html
scp: /var/www/html/portfolio.take7: Permission denied
Should I be changing permissions by using sudo prior to scp -r?
Edit2:
I have also tried
Where_directory_is$ scp -r /path/directory_name name@ip_address:/var/www/html
/var/www/html: No such file or directory
It might be easier to start with SCP which allows you to copy files with one command. So for example, if you had a local file /path/filename.css and wanted to transfer it to your server, you could use the following command on your local machine:
scp /path/filename.css username@remote_hostname_or_IP:~
This command copies the local file and transfers it to the home directory of the username on the remote server using SSH. You can then SSH in (ssh username@remote_hostname_or_IP) and then do what you need with the file sitting in your home directory, such as move it to the proper Apache directory.
Once you start to get more comfortable, you can switch to sftp if you like.
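When you do switch, remember the direction: get copies from the remote server to your local machine, and put copies from your local machine to the server. A short illustrative session (paths are made up):
sftp username@remote_hostname_or_IP
sftp> cd /var/www/html
sftp> put /path/filename.css
sftp> get index.html
sftp> bye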
Update
Here is how to set up your Apache permissions. Let's say you have an account named you on the Linux computer running Apache, and we'll say the IP is 192.168.1.100.
On your local machine, create this shell script, secure.sh, and remember shell scripts need to have execute privileges (chmod +x secure.sh). Fill it with the following contents:
#!/usr/bin/env bash
# Lockdown the public web files
find /var/www -exec chown you:www-data {} \;
find /var/www -type d -exec chmod -v 750 {} \;
find /var/www -type f -exec chmod -v 640 {} \;
This shell script is setting the permissions for anything in the /var/www/ directory to be 750 for the directories and 640 for the files. This gives you complete read/write permissions for the files and www-data (which is the account for Apache) read permissions. Run this anytime you have uploaded files to ensure the permissions are always set correctly.
Next, SSH into your remote computer and go to the /var/www/html directory. Ensure that the ownership is not set to root. If it is, scp the secure.sh file into your remote computer, become root and run it. This only needs to be done once, so you can remotely set the permissions.
Now you can copy directly to /var/www/ through the scp -r command on your local computer from the top of the directory you wish to copy to /var/www/html:
scp -r ./ you@192.168.1.100:/var/www/html/
Then run this command to remotely run the secure.sh shell script and send the output to out.txt:
ssh you@192.168.1.100 -p 23815 ./secure.sh > out.txt
Then cat out.txt to see that the file permissions changed accordingly.
If this is a public facing computer, then you must add an SSH key to your scp connection. You can use this tutorial to find out more about generating your own keys, it is quite easy. To use the key, you only need to add -i private_key_file to your scp and ssh commands. Lastly, it would actually be safer to keep the /var/www files as root, SSH into the computer, su to become root, then run secure.sh as root (with the owner changed to root in the shell script). It all depends on the level of security you need to worry about. If it is a development computer (which is what I am assuming) no worries then.
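For example, the key option just slots into the existing commands (the key path is illustrative):
scp -i ~/.ssh/id_rsa -r ./ you@192.168.1.100:/var/www/html/
ssh -i ~/.ssh/id_rsa you@192.168.1.100 -p 23815 ./secure.sh > out.txt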
For folders use
scp -r root@yourIp:/home/path/ /pathOfDirectory/
For files
scp root@yourIp:/home/path/file /pathOfDirectory/fileNameCopied
I have a problem with my installation of Docker. When I launch docker-compose up I get this error:
front_1 | /var/lock/apache2 already exists but is not a directory owned by www-data.
front_1 | Please fix manually. Aborting.
I get this error because I added this line in my Dockerfile:
RUN usermod -u 1000 www-data
But if I delete this line, my Symfony project doesn't work with Docker.
Do you have any ideas to solve my problem?
Best regards
As I see it, you are trying to change the UID of the user www-data inside Docker to the same ID as the host machine user's UID (yours), so you can open project files in your IDE.
This introduces file permission problems for the apache2 service, which can't read its own files (config, pid, ...), simply because it is not the same user anymore.
A quick 'dirty' solution is to change only the owner of the Symfony project files to UID 1000, but keep the group (GID) as www-data. This applies only to the dev machine; otherwise you don't need it. Run this command inside the container:
chown -R 1000:www-data /home/project
You can create a bash alias inside Docker to have it at hand.
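For example, something like this in the container's ~/.bashrc (the alias name is made up; the path comes from the example above):
alias fixowner='chown -R 1000:www-data /home/project'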
The other option is to use ACLs, which set permissions on existing files and folders and are inherited by newly created files under the given folder. This could be put into a bootstrap script inside the container, but only for DEV mode. This way you won't need to run chown.
chown -R 1000:www-data /home/project #set for existing files
/usr/bin/setfacl -R -m u:www-data:rwx -m u:0:rwx -m u:1000:rwx /home/project
/usr/bin/setfacl -dR -m u:www-data:rwx -m u:0:rwx -m u:1000:rwx /home/project
Each -m is for a different user. The first is www-data (apache2), the second is 0 (root), and the third is 1000 (you).
Remember that a UID can change at any time, so this could create a security hole if the mentioned users do not have the proper UIDs.
I used the second method only for folders where PHP via apache2 sets permissions (uploaded files, cache, ...), but the host user needs to access these files.