Apache2 Not Responding: Bitnami Magento Install (Legacy)

For reasons too insane to even go into, I am attempting to install Magento using the Bitnami Magento 1.9.2.4 image on a fresh Amazon AWS/Lightsail Ubuntu 16.04 instance (2 GB of RAM, to avoid complaints and be sure I don't run into anything unnecessary).
I think this is really more of an Apache question. After I finish the install (success), I can't get the server to respond via the instance IP address at the default port (8080).
Regarding the old Bitnami image: you can still get (or wget) that Magento 1.9.2.4 installer; it's over here:
wget "https://downloads.bitnami.com/files/stacks/magento/1.9.2.4-3/bitnami-magento-1.9.2.4-3-linux-x64-installer.run"
So for the sake of anyone who's trying to work through the whole process: once you pull the above down to your instance, you need to chmod the file to 755. This assumes you are in the directory with your download:
chmod 755 bitnami-magento-1.9.2.4-3-linux-x64-installer.run
Then run it using its full path, like:
/home/ubuntu/bitnami-magento-1.9.2.4-3-linux-x64-installer.run
So the install is going to ask a bunch of questions; for anyone keeping track, my answers were all yes (i.e. yes to Git, PhpMyAdmin, Beetailer... whatever that is).
Then I created an admin user / password etc.
As far as ports go, I didn't have anything running on 8080, so the installer defaulted HTTP to 8080, HTTPS to 8443, and MySQL to 3306 (more on ports in a minute).
I think the Host/Domain is one of the keys to this problem. When I couldn't get the server to respond, I just recreated the instance and tried a different Domain during the install process. I tried: the internal AWS IP, the external (actual) IP, and 127.0.0.1.
So basically that sort of brings us up to date.
Once I finished the install, like a normal human used to using Bitnami as a cloud image, I assumed the server would respond at whatever the default path was at the IP address it was running on, i.e.:
BASEIPADDRESS:8080/magento
Not the case. When I hit that, the server does NOT respond, hence the question. In addition to the above I have also tried the BASEIPADDRESS on its own, and BASEIPADDRESS:8080.
Results checking open ports
So since the server is not responding I figured I would check the ports.
First I checked using netstat:
netstat -lntu
I got back a list of the listening ports (posted as a screenshot; the formatting wouldn't work as text).
Then I realized that netstat is now deprecated... so I went with:
ss -lntu
That returned much the same listing (again as a screenshot, so excuse the images).
To me it looks like 8080 (default) is open in both of those results. So why isn't the server responding at the default location?
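For a more targeted check than the full listing, something like this also works (the -p flag, which shows the process owning each socket, is an extra here and needs sudo):
sudo ss -lntp | grep 8080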
Bitnami Status = OK
Checking the status with:
/home/ubuntu/magento-1.9.2.4-3/ctlscript.sh status
Everything looks good:
apache already running
mysql already running
Memcached not running
Since it said Memcached was not running, I started memcached to see if that was the issue; it was not.
I can access the instance via SSH, and yes, I am sure the IP is right (see the port checks above).
I also posted this to the Bitnami community but haven't heard anything over there. Will cross-populate as I get ideas.

It looks to me like you configured Magento using the private IP address, so you would not be able to access it from your browser. A way to check is by executing the following command on the machine itself:
curl -L 127.0.0.1:8080/magento
If that provides output, then the IP is misconfigured, and you would need to reinstall using the proper (public) IP address.
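A complementary check from your own computer (the address below is just a placeholder for the instance's public IP): if the curl above works on the instance itself but this one times out, the port is more likely blocked by the AWS/Lightsail firewall than by Apache:
curl -v http://YOUR.PUBLIC.IP:8080/magento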

So this ended up being PRIMARILY due to not running the Bitnami stack installer as root / sudo:
sudo /home/ubuntu/bitnami-magento-1.9.2.4-3-linux-x64-installer.run
Why Install with Sudo on AWS/Lightsail?
So the reason you need to install with sudo has to do with the fact that, when run as a normal user (i.e. not root), the installer cannot bind the privileged port 80 and so defaults to port 8080, which is NOT open on AWS/Lightsail by default. To complicate matters further, you may not be able to get things running properly even if you manually swap to port 80 AFTER you run the installer.
To avoid a scenario where port 80 requires root access to utilize, I just re-created my instance and ran the installer as root with the above command.
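For completeness (this isn't what I did): the other way out would be to open 8080 in the Lightsail firewall rather than reinstalling. A sketch with the AWS CLI, where the instance name is a placeholder; the Networking tab in the Lightsail console does the same thing:
aws lightsail open-instance-public-ports --instance-name MyMagentoInstance --port-info fromPort=8080,toPort=8080,protocol=TCP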
Host Setting
During install I selected the public IP for the "Host" prompt and everything worked as I thought it might (straight out of the box).
Thanks to Javier Salermon, who put me on the right track, and the devs at Bitnami for clueing me in to the fact that 8080 is not open by default.

Related

How can I change the port for Webmin access?

I'm managing a Rails website that relies on MySQL tables. I used to be able to access the tables with Webmin by going to xxx.xxx.xxx.xxx:10000. At some point, I lost the ability to access it this way.
I'm not great with SSH, but I was able to SSH into my website as a general user. Then, I switched to Root user. Then, I ran this command to restart my Webmin:
svcadm restart webmin
It seemed to accept the command, but I'm not really sure how to check. Anyway, I started getting a new error message when I tried to access Webmin through my browser:
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
The website uses a connection with Cloudflare. My guess was that this could be the problem since Cloudflare doesn’t support port 10000. I tried deactivating Cloudflare, but it didn't resolve the problem.
Still, I figured I might need to use SSH to change the port for Webmin access just in case. I found a source that suggested how to do it, and it starts with SSH. I went in through SSH and I switched to Root user. Then, I ran this command:
/etc/webmin/miniserv.conf
And it tells me
bash: /etc/webmin/miniserv.conf: Permission denied
How can I get past this step? Am I headed the wrong direction to get access back to Webmin? Thank you!
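For what it's worth, /etc/webmin/miniserv.conf is a configuration file, not a command, which is why the shell refuses to run it. The usual approach is to open it in an editor, change the port line, and then restart Webmin. A rough sketch, assuming a standard Webmin layout and the same svcadm-based system used above:
nano /etc/webmin/miniserv.conf   # as root: change "port=10000" (and "listen=10000" if present) to the new port
svcadm restart webmin            # restart Webmin so the change takes effect
svcs webmin                      # confirm the service shows as "online"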

I want to turn my localhost server into a real website

I created an application that runs on a localhost server using expressjs. And I also bought a domain.
I'm wondering if there is a way to take that localhost server and turn it into a real shared server.
I tried once to use a hosting service like hostgator but I still don't know how I can turn the express app into a real website.
I have no experience with any web development services so please don't tell me to use ....... whatever because I will have no idea what that is.
For one thing, it is not clear how your website actually works: if it is only Express, does it generate HTML, or is it purely JSON passed to browser clients via GET requests? (To each their own.)
There are so many options as to how you might do this. One of the best is to first make sure your server runs in Docker: find a tutorial on YouTube/Google/Stack Overflow/blogs on how to run your Express server with Docker. If you do that, you can deploy it to a container service on Google/Amazon/DigitalOcean. If this seems hard to you, there are other options.
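If you go down the Docker road, a rough sketch of what that can look like (not a definitive setup: it assumes your app listens on port 3000 and starts with npm start, and the node:18-alpine base image is an arbitrary choice):
cat > Dockerfile <<'EOF'
# build a small image for the Express app
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
EOF
docker build -t my-express-app .
docker run -d -p 3000:3000 my-express-app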
Presumably you run your server with something like npm start. This guide can show you how to do essentially that but on a cloud computer.
Before you begin, make sure that your locally working server is checked in to a cloud Git provider like GitHub, GitLab, Bitbucket, etc.
Amazon AWS and Google Cloud both have free tiers: hosting that is free for a certain amount of time (AWS, 1 year) or up to a certain amount of credit (Google Cloud). These two seem like viable places to start.
Once you've found the option you'd like, you'll need to:
create an account
Create a server (choose a cheap one, especially initially: micro/small/cheap, etc.).
Find a tutorial on how to "SSH" into that server (which basically means remotely controlling the terminal on that server). Google actually makes this fairly easy; there's a big button that says "SSH into this server".
Once you've logged into that computer, you'll be able to run the same commands you probably normally do on your home computer:
The computer you'll be getting is likely to be a virtual Linux machine, probably something like Ubuntu. Find a tutorial on how to get git and Node installed there (on Ubuntu it's something like sudo apt-get update && sudo apt-get install -y git nodejs npm).
Once you have git and Node, make a working directory and cd into it: mkdir www && cd www (this isn't critical, but it's conventional).
Copy the link that allows you to "Clone your repo using HTTPS" (there's a button near the top right of your GitHub, or similar, repo page that gives you that link), then git clone it on the server; you'll need to enter your credentials.
Now all the files that you had on your computer are on this new computer.
Next you'll probably have to run npm i to install your dependent npm packages. (This assumes you properly used .gitignore to keep node_modules out of the repository, rather than filling GitHub with extra copies of your packages.)
Now you should be able to run your code as usual: npm run start
If all those steps work, you'll want something that will keep the server running "forever", like https://www.npmjs.com/package/forever (npm i -g forever) or, even better, https://www.npmjs.com/package/pm2; either will allow you to continuously run your Express server.
Finally, you'll need to configure the server on AWS/Google/whatever service you're using to route traffic coming in on ports 80 and 443 to port 3000, and to open that traffic to the world. That part differs depending on the service you chose, so find a tutorial for doing just that part.
This will only allow people across the internet to see your service on an AWS URL or a Google URL, but it's a good chance to make sure everything works perfectly. Once you're happy with everything, associate your purchased domain with that AWS/Google address. You can do that on the AWS side, or on the GoDaddy/Namecheap/wherever-you-bought-your-domain side.
For the Docker option, you can download the aws-cli tools, upload your built Docker container to AWS, and have it available there. Find a tutorial to do that.
Your question is very broad, so I brushed over some details, but this is essentially what you have to do.
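Pulling the non-Docker steps above together, a rough end-to-end sketch (the repo URL and app name are placeholders; it assumes Ubuntu and that npm start launches your app on port 3000):
# on the cloud VM, once you've SSHed in
sudo apt-get update && sudo apt-get install -y git nodejs npm
mkdir www && cd www
git clone https://github.com/your-user/your-app.git
cd your-app
npm install
npm run start                                # quick sanity check, then Ctrl+C
sudo npm i -g pm2
pm2 start npm --name your-app -- run start   # keeps the app running after you log out
# one way to send traffic arriving on port 80 to the app on port 3000; the provider's
# firewall/security group still has to allow port 80, which is done in the AWS/Google console
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000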

Session is lost in apache after nginx proxy switch

I am building a Docker setup which I can use for my work. I am using macOS. If I create a Docker container with Xdebug installed (Ubuntu 16, PHP 7.2, Xdebug, Apache), code execution is extremely slow even if I am not listening on the Xdebug port. I have already gotten rid of the mounts.
So I decided to create something like this:
[diagram: docker structure]
And everything works just like I want: when I change the cookie in the browser, my website works fast, but when I change the cookie to another value, I am able to debug. The problem I am facing is that it logs me out when I change that cookie value and nginx proxies me to the other server. (Each Apache is a single Docker container with Ubuntu and Apache.)
So my question is whether there is a workaround for this, so I could share the session between the servers and not get logged out? Or at least any ideas on what needs to be changed in that scheme.
P.S. My project is Magento 2, and the source of the issue could be in there, but I actually don't think so.
According to https://www.nginx.com/products/nginx/load-balancing/, sticky sessions are a feature of NGINX Plus.
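That said, pinning a user to one backend would defeat the cookie-based switching here, so the more direct workaround (not from the answer above) is to point both PHP containers at the same session store, so a session started on one Apache is visible on the other. A rough sketch assuming Ubuntu paths, PHP 7.2, the php-redis extension, and a Redis container reachable as redis; Magento 2 can alternatively be told to keep sessions in Redis via app/etc/env.php:
apt-get install -y php-redis
cat > /etc/php/7.2/apache2/conf.d/99-shared-sessions.ini <<'EOF'
; identical settings in both Apache containers
session.save_handler = redis
session.save_path = "tcp://redis:6379"
EOF
service apache2 restart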

How to access Apache with CentOS Server running on a VM under Windows7

Under Windows 7 I am running a CentOS-6.2-x86_64 server (in a VM) with Apache 2, PHP 5, and MySQL installed. The VM is working fine; Apache and MySQL are started.
Now I want to open a webpage served by the VM in a browser under Windows 7.
I get following message:
"Forbidden. You don't have permission to access /index.html on this server."
My Windows firewall is activated. From the Windows console I pinged the VM server successfully.
What am I doing wrong, or what do I have to do?
This is almost certainly an issue with either the permissions on the path you're trying to access, or the mode in which you are running Apache. If, in your httpd.conf or ssl.conf files, you have a directive like SSLRequireSSL for this path, it will show a Forbidden message when you attempt to access it via http rather than https.
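A quick way to check for that on CentOS (the paths below are the usual defaults; adjust them if your config lives elsewhere):
grep -rn "SSLRequireSSL" /etc/httpd/conf /etc/httpd/conf.d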
Another reason this can happen is if you have http basic auth set up or some such, and cancel the login process.
Probably the most likely reason, though, is simply too-strict permissions on the folder or files that Apache is attempting to serve. Go to the path where index.html lives and make sure the files you want are readable (e.g. chmod 644) and the directories leading to them are traversable (e.g. chmod 755), and that anything meant to run as a script is executable; then you should be able to serve via Apache as expected. You may also need to chown apache:apache the files in question if they need to be writable by Apache as well, but the former should get you going at least.
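A rough sketch of that check, assuming the default CentOS docroot (adjust the paths to wherever your index.html actually lives):
ls -ld /var/www/html /var/www/html/index.html        # see current owners and modes
sudo chmod 755 /var/www/html                          # directories need the execute bit to be traversable
sudo chmod 644 /var/www/html/index.html               # files only need to be readable
sudo chown apache:apache /var/www/html/index.html     # only if Apache also needs to write the file
# on CentOS, SELinux labels are another common cause of "Forbidden"; this restores the defaults
sudo restorecon -Rv /var/www/html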

Basic Apache commands for a local Windows machine

I have installed XAMPP on my Windows 7 machine and created a number of virtual servers. This part is straightforward enough.
Each time I add a new virtual server I have to reboot my computer in order to restart the Apache server, which is of course quite time-consuming.
I have googled and found the correct console commands to restart Apache, but absolutely none of the references that I have found actually tell you where to type the relevant commands! A certain level of knowledge is assumed.
So my question is - where do I actually type apache -k restart?
Going back to absolute basics here. The answers on this page and a little googling have brought me to the following resolution to my issue.
Steps to restart the Apache service with XAMPP installed:
Click the Start button and type CMD (if you're on Windows Vista or later and Apache is installed as a service, make sure this is an elevated command prompt)
In the command window that appears type cd C:\xampp\apache\bin (the default installation path for Xampp)
Then type httpd -k restart
I hope that this is of use to others just starting out with running a local Apache server.
For frequent uses of this command I found it easy to add the location of C:\xampp\apache\bin to the PATH. Use whatever directory you have this installed in.
Then you can run from any directory in command line:
httpd -k restart
Note that httpd -k -restart (with a hyphen before restart), which another answer suggested, is a typo; the correct form is httpd -k restart. You can see the available commands by running httpd /?
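For reference, the other -k options you are most likely to want from that same prompt (or from any directory once C:\xampp\apache\bin is on your PATH):
httpd -k stop
httpd -k start
httpd -k install (registers Apache as a Windows service)
httpd -k uninstall (removes that service again)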