Permission denied while uploading a file to EC2 - Apache

I have installed the Apache web server on Linux. I used the following command to upload a file, but I get "permission denied":
scp -i adjmp.pem index.html ec2-user@ec2-50-17-88-33.compute-1.amazonaws.com:/var/www/html/hi
How can I upload a file to EC2?

You most likely need to change the filesystem permissions, which is usually done with the chmod command. You may also need to modify file ownership (with the chown command).
If you need help with the exact steps, provide the output of the following two commands and I will try to help:
sudo ls -al /var/www/html/hi
id
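If the target directory stays owned by root, one common fix is to hand it to the user you connect as. A minimal sketch, run on the server, assuming the ec2-user login and the path from the question:
sudo chown -R ec2-user /var/www/html/hi
After that, the scp command above should succeed.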

Related

WordPress installation on a VM instance on Compute Engine on Google Cloud - file permission errors

I have been asked to look at a WordPress site that is on Google Cloud. The WordPress admin works fine, but the front end of the site doesn't load the CSS.
I believe it to be a file permission issue.
Replicating the site and placing it on a different server with correct WordPress file permissions, it works fine.
However, on Google Cloud I have issues when trying to change the file permissions.
I have FTP access using FileZilla but can't change file permissions that way, and if I try to use the Apache SSH console to change file permissions, that won't apply either.
Looking at the owner of the folder /var/www/html, the group is showing as www-data, not root. So the first question is: what should be the correct owner and group?
To change folder and file permissions and ownership, do the following.
SSH into the VM; Google Cloud provides a browser-based SSH terminal.
SSH will open a Linux terminal. If you are the root user, there is no need to type 'sudo' for the following commands.
Type 'sudo vim /etc/apache2/envvars'.
Read what the config file says; the defaults are:
export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data
Exit the config file back to the Linux terminal command line.
Type the following commands to give Apache appropriate user and group permissions in the public WordPress directory, changing the user and group names as appropriate:
sudo chown -R www-data:www-data /var/www/html
sudo find /var/www/html -type d -exec chmod 750 {} \;
sudo find /var/www/html -type f -exec chmod 640 {} \;
You can now exit the SSH terminal. Note that if you want to see the new permissions in FileZilla, press F5 to refresh.
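A quick check that the new ownership and modes took effect, using the same path:
ls -ld /var/www/html
ls -l /var/www/html | head
Directories should now show drwxr-x--- www-data www-data, and files -rw-r----- www-data www-data.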

Copy files from a remote server which requires superuser privileges to local machine [duplicate]

I am trying to use WinSCP to transfer files over to a Linux instance from Windows.
I'm using the private key for my instance to log in to the Amazon instance as ec2-user. However, ec2-user does not have access to write to the Linux instance.
How do I sudo su - to access the root directory and write to the Linux box, using WinSCP or any other file transfer method?
Thanks
I know this is old, but it is actually very possible.
Go to your WinSCP profile (Session > Sites > Site Manager)
Click on Edit > Advanced... > Environment > SFTP
Insert sudo su -c /usr/lib/sftp-server in "SFTP Server" (note this path might be different on your system)
Save and connect
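Note this only works if the login user can run sudo without a password prompt, since SFTP gives you no terminal to type one into. A sudoers entry along these lines is assumed (the username is an example; a rule restricted to the sftp-server binary is safer than ALL):
ec2-user ALL=(ALL) NOPASSWD:ALL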
On AWS Ubuntu 18.04, there is an option in WinSCP that does exactly what you are looking for.
AFAIK you can't do that.
What I did at my place of work is transfer the files to your home (~) folder (or really any folder that you have full permissions in, i.e. chmod 777 or variants) via WinSCP, and then SSH to your Linux machine and sudo from there to your destination folder.
Another solution would be to change permissions on the directories you are planning to upload the files to, so your user (which is without sudo privileges) could write to those dirs.
I would also read about WinSCP Remote Commands for further detail.
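A minimal sketch of that staging workflow from a plain terminal (key path, host name, file name, and destination are placeholders):
scp -i key.pem myfile ec2-user@my-host:~/
ssh -i key.pem ec2-user@my-host
sudo mv ~/myfile /destination/folder/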
Usually all users will have write access to /tmp.
Place the file in /tmp, then log in with PuTTY; from there you can sudo and copy the file.
For SUSE Linux Enterprise Server 15.2 on an EC2 instance, the command to add to WinSCP's "SFTP Server" setting is:
sudo su -c /usr/lib/ssh/sftp-server
I didn't have enough reputation points to add a comment to the original answer, but I had to fish this out, so I wanted to add it.
SSH to FreePBX and run the commands below in your terminal:
sudo nano /etc/sudoers.d/my_config_file
Add the following line to that file:
YourUserName ALL=(ALL) NOPASSWD:ALL
Then restart sshd:
sudo systemctl restart sshd
WinSCP:
Under Session Login ==> Advanced ==> SFTP
change "SFTP Server" to:
sudo /usr/libexec/openssh/sftp-server
I have the same issue, and I am not sure whether it is possible or not; the solutions above did not work for me.
As a workaround, I move the files to my HOME directory, then edit and replace them over SSH.
Tagging this answer which helped me; it might not answer the actual question.
If you are using a password instead of a private key, please refer to this answer for a tested, working solution on Ubuntu 16.04.5 and 20.04.1:
https://stackoverflow.com/a/65466397/2457076

NGINX SSL certificate permission error: SSL error:0200100D:system

I'm getting the following error on an NGINX server, using Let's Encrypt's free SSL certificate:
2016/06/23 19:53:13 [warn] 5013#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
2016/06/23 19:53:13 [emerg] 5013#0: BIO_new_file("/etc/letsencrypt/live/abc/fullchain.pem") failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/letsencrypt/live/abc/fullchain.pem','r') error:2006D002:BIO routines:BIO_new_file:system lib)
Both problems have one root cause.
This error usually happens when you try to start nginx as a non-root user. You could try to start it as root or under sudo.
It looks like the permissions on your pem file do not allow nginx to read it when you start it as a non-root user; you could try to change the file permissions, or start nginx as root or under sudo.
Hello, I also had the same issue.
setenforce 0
It was solved this way (setenforce 0 puts SELinux into permissive mode).
If your chmod and chown are all correct on the file, this is probably because you copied the file into a folder - maybe your home folder (say) - then mv'd the file into position for NGINX. SELinux remembers the original file creation context and applies those rights wherever the file is mv'd to. To reset the SELinux context to match the current location and file permissions, use
restorecon filename
This usually sorts it.
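For the certificate path in the error above, that might look like the following (path taken from the question; -R recurses, -v prints what was relabelled):
sudo restorecon -Rv /etc/letsencrypt/live/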
If you run the nginx worker process as the www-data user, you just need to set rights on the /etc/letsencrypt/ folder:
chown -R www-data:www-data /etc/letsencrypt/
chmod -R 755 /etc/letsencrypt/
It should work.
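Before restarting nginx, a quick way to confirm the worker user can actually read the certificate (user and path assumed from this answer and the question):
sudo -u www-data cat /etc/letsencrypt/live/abc/fullchain.pem > /dev/null && echo readable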
In my case (Oracle Linux 8), a similar issue was resolved after changing the SELinux contexts of the .pem files:
$ chcon -t httpd_sys_content_t *.pem
In my case, I solved it by copying the files to the /etc/ssl/certs/ folder and changing the path in nginx.conf.

Copying a file from the local machine to Ubuntu 12.04 returns permission denied

How do I grant myself permission to transfer a .crt file from my local machine to the AWS Ubuntu 12.04 server?
I am using the following command from my machine and receiving a permission denied response.
scp -i /Users/me/key_pair.pem /Users/me/ssl-bundle.crt ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com:/etc/ssl/certs/
I am following Comodo's instructions; refer to the heading "Configure your nginx Virtual Host" from the link. I have not set anything up with regard to permissions as a user. This is a little new to me and I will appreciate further sources of information.
I changed the permissions of the path on the server and transferred the file!
With reference to file permissions, I gave the /etc/ssl/certs/ path "other write & execute" permission with this chmod command when SSH'd into the Ubuntu server:
sudo chmod o+wx /etc/ssl/certs/
Then, on my local machine, the following command copied a file from my directory and transferred it to the destination:
scp -i /Users/me/key_pair.pem /Users/me/ssl-bundle.crt ubuntu@ec2-50-150-63-20.eu-west-1.compute.amazonaws.com:/etc/ssl/certs/
It is the write permission you need; depending on your use case, use the appropriate chmod command.
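Once the transfer is done, you may want to close the directory again; a follow-up step not in the original answer:
sudo chmod o-wx /etc/ssl/certs/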
The simplest way to transfer files from local to EC2 (or EC2 to local) is FileZilla.
You can connect to your instance with FileZilla, then transfer files from local to server and vice versa.

Showing Apache (PHP) errors on the web

I'm looking for a way to display the errors that Apache generates under
/var/log/apache/error_log
for my colleagues who are working on the same project. The coding language is PHP.
First I tried a simple PHP readfile script, but since the file is only visible to the root user, I was unsuccessful. I do not want to use things like cPanel or Kloxo. Can anyone help?
To answer this properly we'd need to know a lot more about your setup.
/var/log/apache/error_log
kind of implies that this is some variant of Unix - but which one?
"First I tried to do a simple php readfile script"
Does that mean you're trying to make the file available via HTTP, or do the relevant users have shell accounts?
If it's just the permissions problem you need to solve, and it's one of the mainstream Linux variants, then it's probably set up to use logrotate; the config files for logrotate will be in /etc/logrotate.d/httpd or /etc/logrotate.d/apache. Read the man page for logrotate, then change the file to something like:
/var/log/httpd/*log {
    missingok
    notifempty
    sharedscripts
    create 660 apache webdev
    postrotate
        /sbin/service httpd reload > /dev/null 2>/dev/null || true
    endscript
}
Here 'create 660 apache webdev' makes the rotated file owned by the apache uid and webdev gid, with permissions -rw-rw---- (note that create is a logrotate directive, so it belongs outside the postrotate script).
That will then take effect for all future log rollovers. You still need to set the permissions on the directory, e.g.
chmod a+rx /var/log/apache
chmod a+rx /var/log
will give everyone read access to the directories
...and change permissions on the current files:
chgrp webdev /var/log/apache/*.log
chmod g+r /var/log/apache/*.log
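To sanity-check the edited config without rotating anything, logrotate has a debug mode (the exact filename under /etc/logrotate.d/ is assumed):
sudo logrotate -d /etc/logrotate.d/httpd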
You have to change error_log's file permissions in order to be able to read it.