I am trying to do:
scp ec2-user@php-pos-web-1:/var/log/httpd/* tmp/php-pos-web-1;
I get permission denied because those files are owned by root. I don't want to enable root login. Is it ok if I change the group for that folder? What is the best way?
I am on amazon linux
It is generally not a good idea to change the permissions of system folders. They are not world-readable for a reason: they can expose potentially sensitive data (accessed pages, clients' IP addresses, ...).
Instead, copy the files to an accessible location (e.g. /tmp) using sudo, fix their ownership, and then copy them back over the network.
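For example, a minimal sketch of that, reusing the host from the question (the /tmp path is illustrative):

ssh ec2-user@php-pos-web-1 'sudo cp -r /var/log/httpd /tmp/httpd-logs && sudo chown -R ec2-user /tmp/httpd-logs'
scp -r ec2-user@php-pos-web-1:/tmp/httpd-logs/* tmp/php-pos-web-1
ssh ec2-user@php-pos-web-1 'rm -rf /tmp/httpd-logs'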
I have just purchased a dedicated server from a UK hosting company that uses cPanel and I have root access
I am using scp to copy a huge (> 2 TB) website from another hosting company (1&1 IONOS, using Plesk, not that it should make any difference)
The files are copying over; using SSH I can use the "ls" command to list all the files that I've copied over
However, when I use the File Manager option via cPanel interface, I can see the first folder name on the left hand side (i.e. public_html/my-copied-site) but on the right hand window it shows the directory as empty
If I use the "ls" command, I can see the files & folders
If I try to access any of the files directly via a web browser, I get a 403 Forbidden message
What have I done wrong?
The answer to this problem is the ownership of the folder
Using scp over SSH meant that I was logged in as "root" and therefore the owner of the folders was also "root"
Changing the owner of the folder (using "chown" command) to the account's name resolved the problem
Hope this helps someone out
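For example, assuming the cPanel account is named myaccount (a hypothetical name):

chown -R myaccount:myaccount /home/myaccount/public_html/my-copied-site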
After successfully mounting a directory (ZFS remote storage) from one of the servers, I'm getting an "Operation not permitted" error when I try changing the ownership of the directory. I'm using the following commands:
To mount the remote directory:
mount -t nfs 10.1.32.33:/dir/temp/tools /home/materials
After mounting the directory, the contents belong to nobody:nobody
I want to change ownership so I can run the installer inside the directory.
I'm using the command below to change ownership but it's not working:
chown -R otm:otm materials/
I can always upload the files to the server without using the ZFS storage, but I want to start building a central installer repository so I don't need to re-upload the files/installers for future server installs. I appreciate your help, guys.
NFS servers by default do not allow root access to files - root is normally mapped to "nobody".
See "root squash":
Root squash is a reduction of the access rights for the remote superuser (root) when using identity authentication (local user is the same as remote user). It is primarily a feature of NFS but may be available on other systems as well.

This problem arises when a remote file system is shared by multiple users. These users belong to one or multiple groups. In Unix, every file and folder normally has separate permissions (read, write, execute) for the owner (normally the creator of the file), for the group to which the owner belongs, and for the "world" (all other users). This allows restriction of read and write access to the authorized users, while in general the NFS server must also be protected by a firewall.

A superuser has more rights than an ordinary user, being able to change the file ownership, set arbitrary permissions, and access all protected content. Even users that do need root access to individual workstations may not be authorized for similar actions on a shared file system. Root squash reduces the rights of the remote root, making one no longer a superuser. On UNIX-like systems, the root squash option can be turned on and off in the /etc/exports file on the server side.

After implementing root squash, the authorized superuser performs restricted actions after logging into the NFS server directly and not just by mounting the exported NFS folder.
In general, you DO NOT want to disable root squash unless you REALLY know what you're doing as there are serious security issues you can create if you do that. And since you didn't even know it exists...
(And that mention of /etc/exports is an extremely limited statement that is wrong on many systems - like Solaris.)
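For the practical problem in the question, the usual fix is therefore to change the ownership on the NFS server itself, not on the client. A minimal sketch, assuming the otm user has the same uid/gid on both machines (1001 is purely illustrative):

# On the NFS server (10.1.32.33), as root:
chown -R 1001:1001 /dir/temp/tools

For reference only, since the advice above is to leave root squash enabled: on Linux servers it is controlled per export in /etc/exports, e.g. (client network illustrative):

/dir/temp/tools 10.1.32.0/24(rw,sync,no_root_squash)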
I've been working with the server for only 2 days, so I am sorry if this is a simple question. I looked everywhere but didn't find an answer.
So I have a Google Compute Engine account and I have owner privileges. When I run
gcloud compute ssh instance --zone us-central1-a
it works, but it creates a key with a username that it takes from my computer account.
So when I am in the Google shell I can add or remove files using sudo. But when I go to Filezilla I have to use the SSH key file and the username from that key. And the only folder accessible with that username is its own folder. I am not sure what the problem is, so I gave all the facts I could.
I'm not entirely sure I'm answering the right question, but I'll take a stab at it. The ssh keys created by/used by gcloud are specific to a particular linux user on your VM. As you note, you can use sudo when ssh'd in to edit files/directories owned by different users---the way this works is that you (roughly speaking) temporarily switch users to root when doing the file edit.
An scp client like Filezilla isn't going to be able to switch users that way. So you'll need a different technique to edit files with Filezilla.
I suggest ssh-ing in to your VM and using chmod or chown to change the ownership or permissions of the files/directories that you want to use with Filezilla. Alternatively, you could use usermod -a -G to add your username to a group that can edit the files you care about.
Exactly what you'll do depends on the security policy you want to enforce for your files, but there are lots of decent options. The key test to run: can you get to a state where you can edit the files when logged in over SSH without using sudo? If so, then you should be able to edit the files with Filezilla.
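A minimal sketch of the group-based approach (the group name and the path are purely illustrative):

# On the VM, logged in via gcloud compute ssh:
sudo groupadd editors                  # hypothetical group
sudo usermod -a -G editors $USER       # add your login user; takes effect on next login
sudo chgrp -R editors /var/www/html    # illustrative path you want to edit via Filezilla
sudo chmod -R g+rwX /var/www/html      # group gets read/write; X keeps directories traversable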
Our server runs CentOS 6 and is managed through Plesk Panel 10.4.4. The folder and file structure is created by a PHP script. When we then access through FTP, we are unable to modify the contents previously created. Accessing over the web (as the Apache user) works without exception, but not over FTP. Folders and files have 755 and 644 permissions respectively. How can we enable FTP access? Thank you
EDIT: the problem is that the file owner and the FTP user are not the same, but I do not know exactly how and where to fix it.
The owner of the files and folders is psacln (uid 502) and the group is apache (gid 503). The FTP users are not the same.
We added the FTP login user (also a system user) to psacln, the group that owns the files and folders, using usermod -a -G psacln ftpusername. Same procedure with the apache group, but the problem persists.
The problem here is likely that you run your site in mod_php mode. In this mode scripts run under Apache's privileges, so all files and directories they create are owned by Apache. That way the files cannot be accessed by your FTP user unless you set 777 or 666 permissions.
I think your options could be:
1. Switch PHP to FastCGI mode. Depending on your Plesk account privileges, you can either do it yourself in the Plesk UI or you will have to ask your hosting provider. This way your scripts will run under your own user's privileges (the same as the FTP user), so there will be no problem accessing the files through FTP. This option is also often considered more secure.
2. Make the PHP script set 777 permissions on your folders and 666 permissions on your files (see the sketch after this list). This allows everyone (the so-called "others") to modify them, so the FTP user can modify these files as well. While this may sound insecure, in practice these files can already be accessed from any other site on that system (if it is a shared hosting server), so I don't think it will be any less secure than the current state.
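If you also have shell access, the permissions from option 2 can be applied in one go (the document root path is illustrative):

find /var/www/vhosts/example.com/httpdocs -type d -exec chmod 777 {} +   # folders
find /var/www/vhosts/example.com/httpdocs -type f -exec chmod 666 {} +   # files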
I guess I'm being a little hesitant, but I deal with VCSs occasionally and always get asked for some sort of password prompt; of course, I'm attempting to access an external machine that I'm SSHing into.
Basically my question is: say I don't have root access on this machine, would it still be possible to set this up? I've skimmed through the reading a couple of times and I'm pretty sure I have the method down: you generate a public/private key pair, SFTP to the machine, and append your public key to an authorized_keys file. How is this managed with multiple users, for example? Could the generic file name (the .pub) get overwritten, or am I completely misunderstanding the process here, and is it set up to allow multiple keys natively?
If I'm not a sudoer and one of the server's directories needs to be chmod'd to, say, 700 whereas it's 655, I can't really do anything other than ask for su access, right?
If you have SSH access to the remote machine, you can generate the key pair on your local machine, add the public key to the authorized_keys file on the remote machine, and then use this for authentication. You don't need root privileges to do this. The keys and the authorized_keys file usually reside under your own home directory (~/.ssh/authorized_keys, etc.), so they don't get confused between users.
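A minimal sketch of that workflow (the hostname is illustrative):

ssh-keygen -t ed25519                 # generate the pair locally; the private key never leaves your machine
ssh-copy-id user@remote.example.com   # append your .pub to the remote ~/.ssh/authorized_keys
ssh user@remote.example.com           # should now log in without a password

Since authorized_keys is a per-user file and each key is appended as a new line, multiple users (and multiple keys per user) coexist without anything being overwritten.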
Your question about setting directory permissions is unrelated, but note that you can only chmod a file or directory you actually own; owning the parent directory lets you add, remove, or replace entries, not change the permissions of files owned by someone else.
Sounds to me like it might be time to curl up with a general *nix administration book, perhaps? Not light reading, but it can be useful and I always find it most informative to learn the details when I'm actually struggling with them.
I ssh all the time into a machine that allows su or sudo. But, it's set up not to allow ssh via "ssh root@machine". So to answer your question, yes it's possible.
You can only change the directory permissions if you own the directory or if you have root access.
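For reference, that setup is typically one directive in the server's /etc/ssh/sshd_config (restart sshd after changing it):

PermitRootLogin no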