Installing a SaaS on a micro EC2 (CentOS) - Apache

I want to install the following SaaS on an EC2 instance: https://github.com/thedevdojo/wave
I installed PHP 8.0, Composer, and nginx, then followed the steps given in the GitHub repo. Everything worked without errors.
Now I want to point nginx (or httpd) at the folder containing the GitHub clone. I modified /etc/nginx/nginx.conf and changed the root to the path of my clone (specifically, to the repo's public folder). Then I added user ec2-user ec2-user; at the top of the file, because that is the user I used to clone.
However, now I'm getting errors from the php-fpm upstream:
connect() to unix:/run/php-fpm/www.sock failed (13: Permission denied) while connecting to upstream
Now I don't know what else to do. I also tried stopping nginx, running httpd, and copying all the files from the clone (sudo cp -r routeoftheclone /var/www/html), putting the files from the public folder inside html and the rest outside of html (see the GitHub project; public is where index.php lives). However, when I load the page I get a blank site, and I'm not sure how to log the errors.
I just want to use the SaaS; I don't care whether it's Apache, nginx, or something else. What else do I need to do?
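A common fix for that 13: Permission denied error, sketched here under the assumption that php-fpm uses the stock CentOS pool file at /etc/php-fpm.d/www.conf, is to grant the nginx worker user access to the socket rather than changing nginx's user:

; /etc/php-fpm.d/www.conf (sketch; paths assume the default CentOS layout)
; make the socket readable and writable by the user nginx workers run as
listen = /run/php-fpm/www.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660

After reverting the user directive in nginx.conf, restart both services with sudo systemctl restart php-fpm nginx.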

Related

Why does Lightsail Bitnami change the index.html location after Let's Encrypt?

My Node.js Bitnami Lightsail instance had its frontend code at /opt/apache/htdocs, and http://example.com was working perfectly, pointing at that directory (my backend is located under /opt/projects).
After running Certbot (Let's Encrypt), my domain now points to a different folder, /var/www/html.
Please advise on:
On the Certbot instructions page I chose Apache for "My HTTP website is running", since there wasn't a Bitnami option. Was that the right call?
Is this the right configuration, and should I just move my code to the html folder?
Does my backend code have to move too? If so, where?
Any other well-known issues that I might face?
Cheers.
Bitnami Engineer here,
We do not have a guide for configuring certbot with Bitnami, but we do have a guide that helps you configure a Let's Encrypt SSL certificate using lego. We have a tool that configures everything, so you do not need to worry about editing Apache's conf files or setting up the renewal process.
sudo /opt/bitnami/bncert-tool
You can learn more about it here.
In case you want to create an SSL certificate manually, you can also run the lego tool directly:
sudo /opt/bitnami/letsencrypt/lego --tls --email="EMAIL-ADDRESS" --domains="DOMAIN" --domains="www.DOMAIN" --path="/opt/bitnami/letsencrypt" run
You will later need to configure Apache's conf files to use that new certificate file. You can learn more about it here.
Note: If you used certbot and it modified Apache's configuration, you will need to undo those changes to use the proper folder. Review the /opt/bitnami/apache2/conf/httpd.conf, /opt/bitnami/apache2/conf/bitnami/bitnami.conf, and /opt/bitnami/apache2/conf/vhosts/* files.
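As a sketch of that manual step (the exact certificate paths depend on lego's output; the DOMAIN placeholders are illustrative), the SSL directives in the Bitnami Apache config would point at the generated files:

# e.g. in /opt/bitnami/apache2/conf/bitnami/bitnami.conf (sketch; adjust DOMAIN)
SSLCertificateFile "/opt/bitnami/letsencrypt/certificates/DOMAIN.crt"
SSLCertificateKeyFile "/opt/bitnami/letsencrypt/certificates/DOMAIN.key"

Restart Apache afterwards, e.g. with sudo /opt/bitnami/ctlscript.sh restart apache.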

Laravel Ubuntu 16.04 returning status code 500

I just deployed a Laravel project I developed on localhost (using XAMPP) to a server.
I uploaded all the files, created a new .env file (and generated an app key), ran composer install, created the database, and ran artisan migrate.
Also, in /etc/apache2/sites-enabled/000-default.conf I set the DocumentRoot to Laravel's public directory, and I tried editing apache2.conf to include the directory with AllowOverride All.
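For reference, the relevant parts of such a vhost look roughly like this (a sketch; the project path is hypothetical):

# /etc/apache2/sites-enabled/000-default.conf (sketch; adjust the project path)
<VirtualHost *:80>
    DocumentRoot /var/www/html/myproject/public
    <Directory /var/www/html/myproject/public>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>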
When I enter my server's IP, I get redirected to the login page (as expected, since I'm using Laravel's Auth).
But I'm getting the error:
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
I tried to follow those steps, but the result is the same, with one exception: I no longer set the DocumentRoot in apache2.conf, as I already set it in 000-default.conf. I also tried setting it a second time in apache2.conf, but that changed nothing.
So what could the problem be?
I'm using Ubuntu 16.04.
PS: Something seems to happen inside Laravel. When I go to an existing route, I always get this 500 error; when I go to a route that doesn't exist, I get a NotFoundHttpException, so the routing itself seems to work. So where does the error come from? These are exactly the files I'm using locally with XAMPP, and locally everything works fine... Any ideas?
EDIT: After changing my logs directory permissions to 777, I'm getting two further errors.
Both point to storage/framework/views/ and storage/framework/sessions, saying "Permission denied". Do I just have to run chmod on those directories as well?
Check the error in storage/logs/laravel.log. If there is no log there, then make sure your directory permissions on storage allow it to be writable; 775 is what I normally use.
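A sketch of the usual commands, assuming the project lives in /var/www/html and Apache runs as www-data:

# give the web server user ownership and group-write on Laravel's writable dirs
cd /var/www/html
sudo chown -R www-data:www-data storage bootstrap/cache
sudo chmod -R 775 storage bootstrap/cache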
Okay, I got it; basically it was the permissions.
I changed the owner of the storage folder to www-data, and now it's working (noted here for anyone else who runs into this problem).
If you use an Ubuntu VPS, you have to set read/write permissions on the web root folder (and all of its files and subfolders) using this command (e.g. over PuTTY/SSH):
chown -R www-data:www-data /var/www/html
I tried this command, and my website no longer shows the 500 error.

Apache2 in a Docker container gives 403 on statically served files

I'm having a weird issue and am looking for ideas.
I'm running an apache2 Debian image that serves some static files and has a few redirect rules.
The container runs fine, but every request results in a 403 (permission denied) error (from curl as well as a browser).
When I then exec into the container and run ls in a static file folder (such as css), those files are served correctly on the next request.
My current workaround is a startup script that runs find /var/www/html/ -name '*'. This makes the container work as expected, with all the served files being accessible.
All the files have the correct owner (www-data) and permissions.
Docker version 1.7.1, but the issue also appeared on 1.7.0.
I'm running an Ubuntu VM, but my colleague reproduced this on a Mac with Docker Machine.
What could be the reason for this behaviour?
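For what it's worth, the workaround described above can be wired in as a small entrypoint script (a sketch; it assumes a Debian-based image whose normal foreground command is httpd-foreground, as in the official httpd image):

#!/bin/sh
# entrypoint.sh (sketch): stat every served file once (works around the
# storage-driver quirk), then start Apache in the foreground
find /var/www/html/ -name '*' > /dev/null
exec httpd-foreground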

Apache/PHP permissions on public_html

I'm a newbie in web development, and I'm trying to make a website. The site works fine on the server but not on my own machine (Apache). My sources are in ~/public_html/. The problem is that I don't have permissions on sub-subdirectories: when it's a directory, it works fine; when it's a directory inside a directory, it doesn't (403 Forbidden). I haven't changed my default Apache configuration, apart from enabling MySQL and PHP.
All my directories have the same permissions. Maybe I need to configure something for that?
Thanks
I used this little script, found at http://boomshadow.net/tech/fixes/fixperms-script/
Fixperms – for one single user
To use the fixperms script, simply log into your server as root, wget the file from our server, then run it. Type in the cPanel username and it will run only for that particular account.
It does not matter which directory you are in when you run fixperms. You can be in the user’s home directory, the server root, etc. The script will not affect anything outside of the particular user’s folder.
This should be done over SSH:
root@example [~]# wget boomshadow.net/tools-utils/fixperms.sh
root@example [~]# sh ./fixperms.sh -a USER-NAME
Fixperms – for all of the users
If you would like to fix the permissions for every user on your cPanel server, simply use the '-all' option:
root@example [~]# wget boomshadow.net/tools-utils/fixperms.sh
root@example [~]# sh ./fixperms.sh -all
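For context, the core of a fixperms-style pass is just two find commands (a sketch of the idea, not the boomshadow script itself; USER-NAME is a placeholder):

# directories need the execute bit (755), plain files do not (644)
find /home/USER-NAME/public_html -type d -exec chmod 755 {} +
find /home/USER-NAME/public_html -type f -exec chmod 644 {} +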

Apache not following symbolic links after changing remote servers (PHP)

After changing remote servers (but not the content being hosted on them), my symbolic links are no longer being followed by Apache through virtual hosts.
When I go into the terminal and run ls -alt, it shows that the symlink is there and correct.
The path the symlink points to is correct, and the expected content is still there.
I have performed svn switch on the root of the content the symlink points to, so it's updated to the current server.
I have checked and opened up all file permissions on the content and its subdirectories.
I have tried svn switch on the content that is symlinked to the shared content, but I am presented with this error:
'.' appears to be part of version 1.7, please upgrade your client version.
I deleted the folder containing the symlink and checked it out again through the new server; this is where the symlink stops working.
Some of my older projects that were checked out through the old server do follow the symlink to the content, with their svn redirected to the new server.
Also, my virtual host, which sets the option to follow symlinks, has multiple places where the same symlink path is used. Each folder inside this vhost has the same substructure, but oddly some symlinks work and others don't.
Any ideas on what I could try to get Apache to follow the symlinks?
Thanks a lot
Following symlinks is OFF by default on most Apache installs, because they're a security risk: they allow easy violation of document-root "jails". If you allow symlinks, it's trivial to have something like ln -s / /site/documentroot/browse, and now your entire filesystem is open for viewing by the world.
If you insist on allowing them, then you'll need
Options +FollowSymLinks
in the appropriate <Directory>, <VirtualHost>, or .htaccess. Relevant docs: http://httpd.apache.org/docs/current/mod/core.html#options
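A minimal sketch of that directive in context (the document-root path is hypothetical):

# inside the relevant vhost config
<Directory "/site/documentroot">
    Options +FollowSymLinks
    Require all granted
</Directory>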