I want to turn my localhost server into a real website - Express

I created an application that runs on a localhost server using Express.js, and I also bought a domain.
I'm wondering if there is a way to take that localhost server and turn it into a real, publicly accessible server.
I tried once to use a hosting service like HostGator, but I still don't know how to turn the Express app into a real website.
I have no experience with any web development services, so please don't just tell me to use ....... whatever, because I will have no idea what that is.

For one thing, it is not clear how your website actually works: if it is only Express, does it generate HTML, or is it purely JSON passed to browser clients via GET requests? (To each their own.)
There are many options for how you might do this. One of the best is to first make sure your server runs on Docker. Find a tutorial on YouTube/Google/Stack Overflow/blogs on how to run your Express server with Docker. If you do that, you can deploy it to a container service from Google/Amazon/DigitalOcean. If this seems hard to you, there are other options.
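As a hedged sketch, assuming you have written a Dockerfile and your app listens on port 3000 (both assumptions; the image name is hypothetical), running it under Docker looks something like:
docker build -t my-express-app .            # build the image from your Dockerfile
docker run -d -p 3000:3000 my-express-app   # run it, exposing port 3000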
Presumably you run your server with something like npm start. This guide shows you how to do essentially that, but on a cloud computer.
Before you begin, make sure that your locally working server is checked in to a cloud Git provider like GitHub, GitLab, Bitbucket, etc.
Amazon AWS and Google Cloud both have free tiers: hosting for free for a certain amount of time (AWS: one year) or for a certain amount of money (Google Cloud). These two seem like viable places to start.
Once you find the option you'd like, you'll need to:
Create an account.
Create a server (choose a cheap one, especially initially: micro/small/cheap, etc.).
Find a tutorial on how to "SSH" into that server (which basically means remotely controlling the terminal on that server). Google actually makes this fairly easy: there's a big button that says "SSH into this server". A hedged example follows this list.
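From your own machine, an SSH login typically looks something like this (the key file, user name, and IP address all depend on your provider and are assumptions here):
ssh -i my-key.pem ubuntu@203.0.113.10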
Once you've logged into that computer, you'll be able to run the same commands you probably normally run on your home computer.
The computer you'll be getting is likely a virtual Linux machine, probably something like Ubuntu. Find a tutorial on how to get Git and Node installed there (on Ubuntu the package names are git and nodejs, so it's something like sudo apt-get update && sudo apt-get install git nodejs).
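Note that Ubuntu's default nodejs package can be quite old; one common alternative (a sketch only, and the Node major version here is an assumption) is the NodeSource setup script:
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs git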
Once you have Git and Node, try making a directory and cd-ing into it: mkdir www && cd www. (This isn't critical, but it's conventional.)
Copy the link that allows you to "Clone your repo using HTTPS" (there's a button near the top right of your GitHub (or other provider's) repo page that lets you do that). You'll need to enter your password.
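The clone itself looks something like this (the user and repository names are placeholders):
git clone https://github.com/your-username/your-repo.git
cd your-repo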
Now all the files that you had on your computer are on this new computer.
Next you'll probably have to run npm i to install your dependent npm packages. (This assumes you properly used .gitignore to prevent GitHub from being filled with extra copies of your npm packages.)
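If you haven't set that up yet, it's one line in the repo root (commit the change afterwards):
echo "node_modules/" >> .gitignore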
Now you should be able to run your code as usual: npm run start
If all those steps work, you'll want something that keeps the process running "forever", like https://www.npmjs.com/package/forever (npm i -g forever) or, even better, https://www.npmjs.com/package/pm2, which will keep your Express server running continuously.
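A minimal pm2 sketch (the app name is an assumption):
npm i -g pm2
pm2 start npm --name my-app -- start   # runs your "npm start" under pm2
pm2 save                               # remember the current process list
pm2 startup                            # prints the command that enables start-on-boot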
Finally, you'll need to configure the server on AWS/Google/whatever service you're using to open inbound traffic and forward traffic coming in on ports 80 and 443 to port 3000. That part differs by service, so find a tutorial for doing just that.
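On a plain Linux VM, one common way to do the port-80 half of that (a sketch, assuming your app listens on 3000; HTTPS on 443 additionally needs TLS terminated somewhere, e.g. in a proxy) is an iptables redirect:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000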
This will only let people across the internet see your service at an AWS URL or a Google URL, but it's a good chance to make sure everything works perfectly. Once you're happy with everything, associate your purchased domain with that special AWS/Google domain. You can do that on the AWS side, or on the GoDaddy/Namecheap/wherever-you-bought-your-domain side.
For the Docker option, you can download the aws-cli tools and upload your built Docker container to AWS to have it available there. Find a tutorial to do that.
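The push to AWS's container registry (ECR) goes roughly like this (the account ID, region, and repository name are all placeholders):
aws ecr create-repository --repository-name my-express-app
aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-express-app 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-express-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-express-app:latest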
Your question is very broad, so I sometimes glossed over details, but this is essentially what you have to do.

Related

Apache2 Not Responding: Bitnami Magento Install (Legacy)

For reasons too insane to even go into, I am attempting an install using the Bitnami Magento 1.9.2.4 image on a fresh Amazon AWS/Lightsail Ubuntu 16.04 instance (2 GB, to avoid complaints and be sure I don't run into anything unnecessary).
I think this is really more of an Apache question. After I finish the install (successfully), I can't get the server to respond via the instance IP address at the default port (8080).
Regarding the old Bitnami image: you can still get (wget) that Magento 1.9.2.4 installer; it's over here:
wget "https://downloads.bitnami.com/files/stacks/magento/1.9.2.4-3/bitnami-magento-1.9.2.4-3-linux-x64-installer.run"
So, for the sake of anyone who's trying to work through the whole process: once you pull the above down to your instance, you need to chmod the file to 755. This assumes you are in the directory with your download:
chmod 755 bitnami-magento-1.9.2.4-3-linux-x64-installer.run
Then run it using its full path, like:
/home/ubuntu/bitnami-magento-1.9.2.4-3-linux-x64-installer.run
The install is going to ask a bunch of questions; for anyone keeping track, my answers were all yes (i.e. yes to Git, phpMyAdmin, Beetailer... whatever that is).
Then I created an admin user / password etc.
As for ports: I didn't have anything running on 8080, so the install defaulted to port 8080, with HTTPS on 8443 and MySQL on 3306 (more on ports in a minute).
I think the Host/Domain setting is one of the keys to this problem. When I couldn't get the server to respond, I just recreated the instance and tried a different domain during the install process. I tried: the internal AWS IP, the external (actual) IP, and 127.0.0.1.
(Screenshot of the Magento 1.9 Domain prompt omitted.)
So basically that sort of brings us up to date.
Once I finished the install, like a normal human used to using Bitnami as a cloud image, I assumed the server would respond at the default path at the IP address it was running on, i.e.:
BASEIPADDRESS:8080/magento
Not the case. When I hit that, the server does NOT respond; hence the question. In addition to the above, I have also tried BASEIPADDRESS alone, and BASEIPADDRESS:8080.
Results checking open ports
So since the server is not responding I figured I would check the ports.
First I checked using netstat:
netstat -lntu
Then I realized that netstat is now deprecated... so I also went with:
ss -lntu
(The output of both commands was posted as screenshots, since the formatting wouldn't work as text; they are omitted here.)
To me it looks like 8080 (the default) is open in both of those results. So why isn't the server responding at the default location?
Bitnami status = OK
Checking the status with:
/home/ubuntu/magento-1.9.2.4-3/ctlscript.sh status
Everything looks good:
apache already running
mysql already running
Memcached not running
Since it said Memcached was not running, I started memcached to see if that was the issue; no, it was not.
I can access the instance via SSH, and yes, I am sure the IP is right (see the screenshots mentioned above).
I also posted this to the Bitnami community but haven't heard anything over there. I will cross-populate as I get ideas.
It looks to me like you configured Magento using the private IP address, so you would not be able to access it from your browser. A way to check is by executing the following command on your machine:
curl -L 127.0.0.1:8080/magento
If that provides output, then the IP is misconfigured, and you would need to reinstall using the proper IP.
So this ended up being PRIMARILY due to not running the Bitnami stack installer as root / sudo:
sudo /home/ubuntu/bitnami-magento-1.9.2.4-3-linux-x64-installer.run
Why Install with Sudo on AWS/Lightsail?
The reason you need to install with sudo is that, when run as a normal user (i.e. not root), the installer defaults to port 8080, which is NOT open on AWS by default. To complicate matters further, you may not be able to get things running properly even if you manually swap to port 80 AFTER you run the installer, because Linux only lets root bind to ports below 1024.
To avoid a scenario where port 80 requires root access to use, I just re-created my instance and ran the installer as root with the above command.
Host Setting
During the install I selected the public IP at the "Host" prompt, and everything worked as I thought it might (straight out of the box).
Thanks to Javier Salermon, who put me on the right track, and the devs at Bitnami for clueing me in to the fact that 8080 is not open by default.

Restart Kubernetes API server with different options

I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm, following this guide. I ran into some issues, but it all worked in the end. Now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I read the authentication page of the documentation, and I decided I want to add authentication via a static password file. To do so, I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I do ps aux | grep kube-apiserver, this is the result, so it is already running (which makes sense, because it is used when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
Where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be restarted when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file that contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after I changed the file, kubectl stopped working and the API was shut down).
After a reboot the whole system was working again.
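For reference, the static password file that --basic-auth-file points to is a CSV with one password,username,uid entry per line. A hedged one-liner to create it (the path and credentials are assumptions):
echo 'S0mePassw0rd,admin,1000' | sudo tee /etc/kubernetes/pki/basic_auth.csv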
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the credentials from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui.
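Roughly, that looked like this (the master host name is a placeholder):
scp root@master:/etc/kubernetes/admin.conf ~/.kube/config
kubectl proxy
# then browse to http://127.0.0.1:8001/ui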
I just hit this with a similar use case: the API server was crashing after I added an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths, the files in /etc/kubernetes/manifests are static pod definitions, so container rules apply.
So if you add an option with a file path, make sure you make the file available to the pod with a hostPath volume, as sketched below.
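A hedged sketch of the fragment you would merge into the kube-apiserver static pod manifest (the names and mount path are assumptions; the JSON fragments are shown as comments):
# Under the apiserver container definition:
#   "volumeMounts": [{ "name": "auth", "mountPath": "/etc/kubernetes/auth", "readOnly": true }]
# Under the pod spec:
#   "volumes": [{ "name": "auth", "hostPath": { "path": "/etc/kubernetes/auth" } }]
# Then point the flag at the path as seen inside the pod:
#   --basic-auth-file=/etc/kubernetes/auth/basic_auth.csv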

Move files from node.js to apache server

Is it possible to move files from a platform like Heroku to a second server running Apache? I want to create an application and push it to Heroku, but I also have an Apache server, and I want to send that server all the images uploaded through my frontend forms.
EDIT
OK, so you want to use Heroku as the main application server, but then Heroku makes HTTP/HTTPS requests to the Apache server?
I think that's what you are asking.
Yeah, no issues with that. If you want to set up an API, I'd go with Laravel 5.3 and use its Passport feature on your Apache server (but this is not required; it's only for security):
https://laravel.com/docs/5.3/passport
END OF EDIT
What the web server actually is really does not matter at all.
It's about whether the servers have internet access, and I'm pretty sure they both do.
The easiest way is FTP, SSH, or Git. If they both have a web server, I'd just zip up your app without the node_modules folder, move it into the web directory, then go to the address of your server in a web browser, e.g. http://mywebsite.com/files-i-just-zipped.zip, and download it. (I normally log onto the new server via SSH and do wget http://mywebsite.com/files-i-just-zipped.zip.)
I'd need more info, but yeah: once you unzip, you need to install Node, though my guess is they already have it.
Then do npm install in the directory containing package.json. A sketch of the whole flow follows.
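A hedged sketch of that zip-and-wget flow (the URL follows the example above; adjust names to your setup):
# On the source machine, zip everything except node_modules:
zip -r files-i-just-zipped.zip . -x "node_modules/*"
# On the destination server:
wget http://mywebsite.com/files-i-just-zipped.zip
unzip files-i-just-zipped.zip
npm install   # re-creates node_modules from package.json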

Why using NGINX or how to deploy Meteor app correctly?

I am going to finish my Meteor app in a few weeks, so I will soon face the problem of making the app available to other people.
First I bought a droplet on DigitalOcean and started to read about ways to deploy a Meteor app to a production server.
I found 2 totally different ways to do that!
The first one is pretty simple (and so I really love it). Here is the link. I have to do a few steps: create a droplet with Ubuntu 14.04, connect to the droplet via ssh, then install and run mup. After that, anybody can access my app. I worried that there was no SSL support (my project is e-commerce, so I really need an HTTPS connection), but then I found a short article in the mup docs, "How to set up SSL with Mup". So everything looks perfect at first glance.
But then I found another way to deploy a Meteor app. Here is the link. It is much more complicated. First I need to install Node and Mongo on my droplet, then install and configure nginx, and only after many steps comes the Meteor installation. The author doesn't explain why people should deploy the app this way, assuming it is obvious to everyone. His only explanation is: "The problem with this is that it isn't wise to run an application like Meteor through your public port (which is 80)".
I admit I have no experience or knowledge in these matters. The one thing I can say for sure is that I need a really proper way to deploy an e-commerce Meteor app, and it doesn't matter if I lose many hours of sleep doing it.
So the question is: which way is the proper one? And (this is important) why?
Both security and performance are important for this project. I am also going to use prerender.io or spiderable (for SEO purposes) and fast render, if that has any influence on your answers. And really, thank you for your answers, guys!
You can deploy your Meteor app on a server via several different mechanisms; there are lots of ways to do the same thing.
As you said, you found two of them.
In the first link you used Meteor Up (mup) to deploy your application, and you deployed successfully.
In the second approach you need to first log in to the server, create a user, install everything the server machine needs, and after that set up nginx.
So, as I understand it, your question is really about nginx, and you want to know:
1) Why do we need to use nginx?
2) Which is the better approach?
The answer to your first question is as follows:
Nginx (pronounced "engine x") is a web server that is used for many purposes, mainly as a reverse proxy ("proxy pass"). Using nginx you can forward a request from one URL to another, and the actual upstream address stays hidden from clients (useful for security and for redirection).
In Meteor your app runs on port 3000 by default, so one option is to open port 3000 and serve your application there directly. But with nginx you can serve your app on port 80, and whenever a request comes in, nginx forwards it to whatever address you configured, such as port 3000.
So the user never knows where the request actually goes: they see port 80, but the request is actually handled on port 3000. That is one advantage of using nginx; there are lots more.
To configure nginx on Ubuntu, first install it with a simple command:
sudo apt-get install nginx
Then put your settings in the nginx configuration file, which is under the following path:
/etc/nginx/sites-enabled/default
Just open this file and set up your configuration there, for example:
server {
    listen 80;
    server_name localhost;
    root /home/parveen/meteor/app;

    location / {
        index /index.html;
    }

    location /api {
        proxy_pass http://localhost:3000;
    }
}
You can adjust this nginx configuration however you like; please read the nginx documentation for details.
After that you need to start your server using forever or nohup (whichever you prefer), so that your server keeps running after you log out of the server; see the sketch below.
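A hedged sketch for a built Meteor bundle run with plain node (the paths, URL, and database name are assumptions, and the bundle's server dependencies are assumed to be installed already):
cd /home/parveen/meteor/app
PORT=3000 ROOT_URL=http://example.com MONGO_URL=mongodb://localhost:27017/app \
  nohup node main.js > app.log 2>&1 &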
Conclusion:
In the second approach you need to install everything yourself via an ssh login to your server, then configure nginx, and then run your server.
If you make any changes, you again have to push the changes to the server, stop the Meteor app, and restart it. But this is the more secure approach, and you can do whatever you want to do.
In the first approach, mup (Meteor Up) does much of that work for you. You just need a little configuration; you can use Docker or, as described in the blog (droplet) link you shared, just run the meteor up command, which first creates a bundle of your app and then runs it. So in the first approach, if you make any changes you don't need to log in to your server and update them by hand; you just run the same command again, and it creates a new bundle with your updates and runs your project. But I don't think that is more secure.
So it depends on your requirements and your choice of which to use.
If you have any questions, you're most welcome to ask.
Hope this helps!
Thanks

Installing Apache on Windows for manual use and multiple users

I installed Apache HTTP Server on our Windows system, to work on a home project; it's for use by "localhost" only. When I installed it, the two options were to install it as a service, for all users, using port 80; or to install it for just the current user, run manually, using port 8080. I selected the second. However, while I'd prefer for it to use port 8080 and be run manually, I'd like it to be set up so that my wife can run it as her user. (Allowing all users would be OK.) I don't see an httpd.conf entry for this. Is there a way to do this either through httpd.conf or a command-line option? I'm guessing I could do this in the registry but I don't want to mess with it if I don't have to. (P.S. There's no need to have multiple instances run simultaneously.)
There's nothing you can do from within httpd.conf; any settings in there affect the server itself, not how it is launched or by whom.
Well, you have a few options:
1. Uninstall the software and re-install it, choosing the all-users option. That would be your best choice.
2. Find the folder where it was installed (or where the Apache executable, httpd.exe in recent versions, is located, as that is the file needed to run the server) and see if you can create a shortcut to it from within your wife's account. The Apache server doesn't care who runs it as long as that file can be executed. The problem you might face is the Windows OS preventing you from running it, especially if it requires administrative rights.
3. Install software such as WampServer for her. Of course, that means two similar pieces of software on the same machine.
If I had to do it, I would go the first route; every other option is gonna be a little complicated to work with. If you do try option 2, a sketch of the manual launch follows.
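Running Apache manually from another account is just launching the executable from a command prompt (the install path here is an assumption):
"C:\Program Files\Apache Software Foundation\Apache2.4\bin\httpd.exe"
To stop it, either press Ctrl+C in that window or run httpd.exe -k shutdown from a second prompt.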
Hope the explanation is clear and the answer helps.