Give non-sudo write access to SMB share mounted via fstab and x-systemd.automount - Raspbian

I'm running Raspbian Stretch.
On there I have some mounts in /etc/fstab.
One of those should be writable by a user (myguest:myguest) without sudo (it's used for SFTP):
//192.168.x.y /path/to/mountpoint cifs defaults,rw,nofail,username=smbuser,password=smbpassword,noauto,x-systemd.automount,x-systemd.requires=network-online.target 0 0
I tried setting the mountpoint directory's owner:group to myguest:myguest, but as soon as it gets mounted it's back to root:root.
I also tried adding these in fstab:
users,uid=1001,gid=1001,permissions
but that didn't change a thing.
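Based on the CIFS answers gathered below, here is a sketch of an fstab line that should hand ownership of the mount to myguest; it is untested on Stretch and assumes myguest's uid/gid really is 1001, as in the options you tried:
# sketch only - the file/dir modes are examples, adjust to taste
//192.168.x.y /path/to/mountpoint cifs rw,nofail,username=smbuser,password=smbpassword,uid=1001,gid=1001,file_mode=0664,dir_mode=0775,nounix,noauto,x-systemd.automount,x-systemd.requires=network-online.target 0 0
uid=/gid= set who owns the mounted files, file_mode=/dir_mode= set their modes, and nounix keeps the server's CIFS Unix extensions from overriding them (see the answers further down).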

Related

Can't access VM Instance through SSH

I was connected to the VM instance through SSH and by mistake I ran the following command:
"chmod -R 755 /usr"
And then I started getting the following message:
"/usr/bin/sudo must be owned by uid 0 and have the setuid bit set"
I have read about different solutions for it:
Setting a startup script to change the root password and connecting through
gcloud beta compute ssh servername
However, I can't stop my instance because it has a local SSD assigned to it, so I don't think the startup script will work, and connecting through ssh asks me for a password:
user@compute.3353656325014536575's password:
But I have never set a password for the user I am using.
Is there any solution so I can connect again to the server and fix the mistake?
Edit:
I have a user which I created manually for FTP; however, this one doesn't have sudo permissions. Is there a way to find out the sudo password?
Thanks in advance.
From the issue at hand: the command chmod -R 755 /usr reset the permissions on everything under /usr, which strips the setuid bit from /usr/bin/sudo.
Try this first before reading the other steps further down.
SSH into your instance. To change the password, just type:
sudo passwd
Type the new password and confirm it.
If that doesn't work, follow the steps below.
"/usr/bin/sudo must be owned by uid 0 and have the setuid bit set"
This means sudo's root ownership and setuid permission have been overwritten, which blocks you from using sudo and costs you all the root access you had. The following steps should help resolve the issue:
Create a backup or snapshot of your instance.
Create a totally new instance, detach the disk from the old instance, and attach it to the newly created instance.
Log in to the new instance and work as root.
Check the attached drive in the new instance: mount and fdisk -l | grep Disk.
Create a new folder in the root directory:
mkdir /newfolder
Now mount the volume: sudo mount /dev/xvdf1 /newfolder/
After mounting, if you check the permissions you will see that the new folder's permissions have changed, because of the affected volume.
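The answer stops short of the actual repair. A minimal sketch of restoring sudo on the mounted volume, assuming the broken root filesystem really is mounted at /newfolder as in the step above:
sudo chown root:root /newfolder/usr/bin/sudo
sudo chmod 4755 /newfolder/usr/bin/sudo
The 4755 mode restores the setuid bit that the recursive chmod 755 stripped; other setuid binaries under /newfolder/usr (su, passwd, ...) may need the same treatment. Afterwards unmount the volume, detach it, and attach it back to the original instance.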

Always permission 777 on mounted CIFS share

I have a little problem when I mount an SMB shared folder from a Synology NAS.
I want to mount a shared folder with permissions: git:root 700
But the mounted folder always has its permissions set to 777 (even after a chmod 700 that reports no error).
In my /etc/fstab I used this line:
#uid=999 ---> git user
//server/folder /mnt/artifacts cifs username=windowsUser,password=xxxxx,gid=0,uid=999,file_mode=0700,dir_mode=0700,iocharset=utf8 0 0
Do you know why I cannot set the permissions to 700? Did I make a mistake? Something stupid?
Thanks in advance for your help ;)
If the remote machine's user ID and the local machine's user ID do not match, the permissions will default to 777. mount.cifs doesn't support umask, so the noperm option can be used instead. This way, even if the permissions of the users on the local and remote machines don't match, the user will still be allowed to read and write to the folder, the equivalent of umask=000.
//address/location /mount/location cifs username=username,password=password,noperm,vers=2.0 0 0
A good start is to check out the manpage for CIFS:
$ man mount.cifs
[...]
file_mode=arg
If the server does not support the CIFS Unix extensions this overrides the default file mode.
dir_mode=arg
If the server does not support the CIFS Unix extensions this overrides the default mode for directories.
[...]
nounix
Disable the CIFS Unix Extensions for this mount.
[...]
So, since file_mode (and dir_mode) only seem to work if the server does not support the CIFS Unix extensions, I would start by disabling them (via the nounix option).
Adding nounix worked just fine. For information, the line I have in /etc/fstab is:
//server/share /mnt/folder cifs credentials=/home/yannick/.smbcredentials,iocharset=utf8,sec=ntlm,vers=1.0,uid=1000,gid=1000,file_mode=0644,dir_mode=0755,nounix 0 0
with 1000 being my user id and group id.
Inside .smbcredentials, I have this :
username=<distant login>
password=<distant password>
I'm trying to mount a CIFS share with permissions only for root. Other users should not even be able to list any files.
Therefore I used the following fstab entry:
//192.168.0.100/DRV /mnt/DRV cifs user=user,pass=pass,uid=0,gid=0,nounix,file_mode=0007,dir_mode=0007 0 0
I also tried the noperm parameter.
In detail, I created the folder with these permissions:
drwxrwx--- 2 root root 4096 May 14 09:09 DRV
After mounting the network share, the folder has:
d------rwx 2 root root 4096 May 14 04:50 W
Your problem is a very common one: the wrong mount options are being used to control the permissions of the mounted folder.
As noted above, mount.cifs has no umask option; ownership and modes have to be passed as CIFS options: uid=, gid=, file_mode= and dir_mode= (adding nounix if the server's Unix extensions override them).
To do this you can use:
//address/location /mount/location cifs credentials=/location,uid=id,gid=id,file_mode=0700,dir_mode=0700,nounix 0 0
This will mount the file share with the permissions you set.
For security I would recommend using a credentials file, which contains the username and password and must be readable only by root.
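For illustration, a sketch of creating such a credentials file; the path is an example, and the format matches the .smbcredentials file shown earlier:
sudo sh -c 'printf "username=smbuser\npassword=smbpassword\n" > /root/.smbcredentials'
sudo chmod 600 /root/.smbcredentials
Then reference it in the fstab entry with credentials=/root/.smbcredentials.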

Using lsync to sync apache webroot files - running into permission issues

I'm distributing load between two web servers, which means all of the Apache settings and vhosts are pretty much identical, and I want to make sure they stay that way by using lsync (or, if there's another solution that helps with the problem I'm having, let me know).
So, obviously, Apache runs as the apache user, and we can't enable root SSH logins, so I created an lsync user that can SSH between the two servers using RSA keys.
And now I'm running into some permission errors, which is kind of what I expected to happen, really. What I'm trying now: I added the lsync user to the apache group, and the apache user to the lsync group... and that seems to work OK, as long as the files carry full (7) permissions for both the user and the group...
I thought about setting a cron job to chown apache.apache every so often, and maybe even chmod +rwx for the group and user, but I'm sure that would cause some other issues.
I thought about having lsync run as the apache user, but it looks like the apache home directory needs to actually be owned by root:root, so that would cause issues with the apache user trying to ssh in and read from the .ssh directory.
I couldn't find much about this when I looked on Google... Most people just used the root user for lsync, which is out of the question.
So if anyone has a fix, that would be great! Thanks.
P.S. I know that I can allow the lsync user to execute specific commands via sudo if I properly configure sudoers... is there a way to have it run sudo chown apache.apache /var/www && sudo chmod -R u+rwx /var/www or something?
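For what it's worth, such a sudoers rule might look like this sketch (add it via visudo; the binary paths are assumptions, verify them with which chown and which chmod):
lsync ALL=(root) NOPASSWD: /bin/chown apache.apache /var/www, /bin/chmod -R u+rwx /var/www
That would let the lsync user run exactly those two commands through sudo and nothing else.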
rsync has an option for forcing the permissions of the files it creates on the destination: --chmod=<blah>. lsyncd does not support this directly, but it can pass extra flags through to rsync.
Try adding this to your lsyncd configuration:
_extra = {"--chmod=Dug+rwx,Fug+rw"}
That should ensure that directories, D, have read/write/execute permissions for owner and group, and files, F, have read/write permissions for owner and group. Any other permissions should be set as they are on the source server.
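For context, a minimal sketch of where that flag sits in a full lsyncd configuration; the source path, host, and target directory are assumptions:
settings { logfile = "/var/log/lsyncd.log" }
sync {
    default.rsyncssh,
    source = "/var/www",
    host = "lsync@web2",
    targetdir = "/var/www",
    rsync = {
        -- pass the permission-forcing flag straight through to rsync
        _extra = { "--chmod=Dug+rwx,Fug+rw" }
    }
}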
If you need the files to be owned by the apache user then you could set up a chown cron job, as you suggest, but you might find that a constantly running script that reads the output from inotifywatch will be more responsive (and mostly idle).
You might consider having the apache user run an rsync daemon. It's little used since tunnelling rsync through ssh is more convenient and more secure, but it might help you side-step this problem.
You need to set up a configuration file, and then simply launch it with rsync --daemon using whatever init system your distro has.
You can then configure your lsyncd with target = "rsync://server/path".
If the connection between the servers is local and the network is trusted then you're done, otherwise you should configure the rsync daemon to listen only on 127.0.0.1, and then use an ssh -L port mapping to route the traffic through an encrypted tunnel (the owner of the tunnel is not important).
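A minimal sketch of such a setup; the module name [www] and the paths are assumptions:
# /etc/rsyncd.conf
uid = apache
gid = apache
address = 127.0.0.1
[www]
    path = /var/www
    read only = no
Launch it with rsync --daemon (if started as root it drops to the uid/gid above), and point lsyncd at it with target = "rsync://127.0.0.1/www".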

Amazon S3 (S3FS) Owncloud permissions

I got my bucket mounted and everything works fine. My fstab is like this:
mybucket /path/to/mountpoint fuse.s3fs _netdev,allow_other 0 0
But when I hit the ownCloud homepage it tells me I have to chmod the directory to 0770. But the s3fs mount can't be chmodded at all.
Removing allow_other doesn't work either, because then only root has access to the mount.
Well, 10 months have passed since you posted your issue, but I just found myself in the same situation, so here is the solution for anyone who needs it:
In the /etc/fstab file, just add the auto-mount entry:
s3fs#YOUR_S3_BUCKET_NAME /YOUR_MOUNTPOINT fuse allow_other,use_cache=/tmp/cache,uid=apache,gid=apache,umask=007 0 0
In the case above it is mounted with ownership by the apache user and group. umask=007 does the trick and sets the permissions on the folder that you need.
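To test the entry without rebooting, something like this should do, reusing the placeholders from the line above:
sudo mount /YOUR_MOUNTPOINT
ls -ld /YOUR_MOUNTPOINT
With umask=007 the directory should come up as rwxrwx--- (0770) for apache:apache, which is exactly what ownCloud asks for.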

Running EC2 instance suddenly refuses SSH connection

I set up the EC2 instance a couple of days ago, and even last night I was able to SSH to it with no problems. This morning I can't ssh to it. Port 22 is already open in the security group and I haven't changed anything since last night.
Error:
ssh: connect to host [ip address] port 22: Connection refused
I had a similar issue recently and couldn't figure out why it was happening, so I had to create a new instance, set it up again, and connect and configure all the EBS storage on the new one. It took me a couple of hours... and now it's happening again. On the previous one I had installed DenyHosts, which might have blocked me, but on the current one only apache2 and mysql are running.
The current instance has been up for 16 hours now, so I don't think it's because it hasn't finished booting... Also, port 22 is open to all sources (0.0.0.0/0) over TCP.
Any ideas?
Thanks.
With the help of @abhi.gupta200297, we were able to resolve it.
The issue was an error in /etc/fstab: sshd is supposed to start after the fstab mounts succeed, but they didn't, so sshd never started, and that's why the connection was refused. The solution was to create a temporary instance, mount the root EBS volume from the original instance on it, and comment out the offending entries in fstab, and voila, it let me connect again. For the future, I simply stopped using fstab for these volumes: I put the mount commands in an /etc/init.d/ebs-init-mount script and ran update-rc.d ebs-init-mount defaults to register it, and I'm no longer having issues with locked-out SSH.
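A minimal sketch of that init-script approach; the device names and mountpoints are assumptions:
#!/bin/sh
# /etc/init.d/ebs-init-mount -- mount EBS volumes outside of fstab
mount /dev/xvdf1 /data
mount /dev/xvdg1 /backup
Make it executable with chmod +x /etc/init.d/ebs-init-mount and register it with update-rc.d ebs-init-mount defaults; if a mount fails here, boot continues and sshd still comes up.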
UPDATE 4/23/2015
Amazon team created a video tutorial of similar issue and show how to debug using this method: https://www.youtube.com/watch?v=_P29ZHu_feU
Looks like sshd might have stopped for some reason. Is the instance EBS-backed? If that's the case, try shutting it down and starting it back up. That should solve the problem.
Also, are you able to ssh from AWS web console? They have a java plugin there to ssh into the instance.
For those of you who came across this post because you are unable to SSH into your EC2 instance after a reboot, this is cross-posted to a similar question on Server Fault:
From the AWS Developer Forum post on this topic:
Try stopping the broken instance, detaching the EBS volume, and
attaching it as a secondary volume to another instance. Once you've
mounted the broken volume somewhere on the other instance, check the
/etc/sshd_config file (near the bottom). I had a few RHEL instances
where Yum scrogged the sshd_config inserting duplicate lines at the
bottom that caused sshd to fail on startup because of syntax errors.
Once you've fixed it, just unmount the volume, detach, reattach to
your other instance and fire it back up again.
Let's break this down:
Stop the broken instance and detach the EBS (root) volume by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped.
Start a new instance in the same region and with the same OS as the broken instance, then attach the original EBS root volume as a secondary volume to your new instance. The commands below assume you mount the volume to a folder called "data".
Once you've mounted the broken volume somewhere on the other instance,
check the "/etc/sshd_config" file for the duplicate entries by issuing these commands:
cd /etc/ssh
sudo nano sshd_config
ctrl-v a bunch of times to get to the bottom of the file
ctrl-k all the lines at the bottom mentioning "PermitRootLogin without-password" and "UseDNS no"
ctrl-x and Y to save and exit the edited file
@Telegard points out (in his comment) that we've only fixed the symptom. We can fix the cause by commenting out the 3 related lines in the "/etc/rc.local" file. So:
cd /etc
sudo nano rc.local
look for the "PermitRootLogin..." lines and delete them
ctrl-x and Y to save and exit the edited file
Once you've fixed it, just unmount the volume,
detach it by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped,
reattach to your other instance and
fire it back up again.
This happened to me on a Red Hat EC2 instance because these two lines were being automatically appended to the end of the /etc/ssh/sshd_config file every time I launched my instance:
PermitRootLogin without-password
UseDNS no
One of these append operations was done without a line break, so the tail of the sshd_config file looked like:
PermitRootLogin without-passwordUseDNS noPermitRootLogin without-passwordUseDNS no
That caused sshd to fail to start on the next launch. I think this was caused by the bug reported here: https://bugzilla.redhat.com/show_bug.cgi?id=956531. The solution was to remove all the duplicate entries at the bottom of the sshd_config file and add extra line breaks at the end.
Go to your AWS Management Console > select the instance > right-click and select "Get System Log".
This will list what went wrong.
Had the same issue, but the system log had this:
Starting sshd: /var/empty/sshd must be owned by root and not group or world-writable.
[FAILED]
I used the same steps described above to detach the volume and attach it to a connectable instance. Then I used:
sudo chmod 755 /var/empty/sshd
sudo chown root:root /var/empty/sshd
(https://support.microsoft.com/en-us/help/4092816/ssh-fails-because-var-empty-sshd-is-not-owned-by-root-and-is-not-group)
Then I detached the volume and reattached it to the original EC2 instance, and could access it via ssh again.
I got locked out of SSH in a similar way: I detached an EBS volume but forgot to update /etc/fstab.
If your Ubuntu has systemd, you can edit /lib/systemd/system/local-fs.target and comment out the last two lines:
#OnFailure=emergency.target
#OnFailureJobMode=replace-irreversibly
I haven't tested this extensively and don't know if there are any risks or side effects involved, but so far it works like a charm. It mounts the root volume and all other volumes (except those that are misconfigured, obviously), then it continues the boot process until SSH is up, so you can connect to the instance and fix the incorrect fstab entries.
In my case, the volume was out of space and a service was failing to start. I used the AWS tutorial (from Sherzod's post) to mount it on a good EC2 instance, clean it up, and remove the service from startup, before remounting it and verifying that things worked.
For me it was that my IP had changed. Hope this helps someone. Navigate to the security groups and update the "My IP" entry in the inbound rules.
I had the same issue: not able to connect to the AWS instance, with a permission denied error.
I got on a screen-share call with the AWS team, and they guided me to fix the folder permissions on the instance using the following user data script.
Steps:
Stop the instance
Actions > Instance settings > Edit user data
Enter the script below and save
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
chown root:root /home
chmod 755 /home
chmod 700 /home/ubuntu
chmod 700 /home/ubuntu/.ssh
chmod 600 /home/ubuntu/.ssh/authorized_keys
ls -ld /home /home/ubuntu /home/ubuntu/.ssh /home/ubuntu/.ssh/authorized_keys
chown ubuntu:ubuntu /home/ubuntu -R
--//
Save and connect to the instance with the correct .pem key.
This resolved my problem.
*Change ubuntu to your instance's username.