I created an application that runs on a localhost server using Express.js, and I also bought a domain.
I'm wondering if there is a way to take that localhost server and turn it into a real, publicly accessible server.
I tried once to use a hosting service like HostGator, but I still don't know how to turn the Express app into a real website.
I have no experience with any web development services, so please don't tell me to use ....... whatever, because I will have no idea what that is.
For one thing, it is not clear how your website actually works: if it is only Express, does it generate HTML, or is it purely JSON passed to browser clients via GET requests? (To each their own.)
There are many options for how you might do this. One of the best is to first make sure your server runs on Docker: find a tutorial (YouTube, Google, Stack Overflow, blogs) on how to run your Express server with Docker. If you do that, you can deploy it to a container service from Google, Amazon, or DigitalOcean. If this seems hard to you, there are other options below.
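Once you have a Dockerfile for the app, building and running it is only a couple of commands. A hedged sketch (the image name is a placeholder, and it assumes the app listens on port 3000):
# build the image from the Dockerfile in the current directory, then run it detached
docker build -t my-express-app .
docker run -d -p 3000:3000 my-express-app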
Presumably you run your server with something like npm start. The steps below show you how to do essentially that, but on a cloud computer.
Before you begin, make sure that your locally working server is checked in to a cloud Git provider like GitHub, GitLab, Bitbucket, etc.
Amazon AWS and Google Cloud both offer free tiers: hosting for free for a certain amount of time (AWS: one year) or up to a certain amount of credit (Google Cloud). These two seem like viable places to start.
Once you've found the option you'd like, you'll need to:
Create an account.
Create a server (choose a cheap one, especially initially: the micro/small/cheap tiers).
Find a tutorial on how to "SSH" into that server (which basically means remotely controlling the terminal on that server). Google actually makes this fairly easy: there's a big button that says "SSH into this server".
Once you've logged into that computer, you'll be able to run the same commands you probably normally do on your home computer:
The computer you'll be getting is likely a virtual Linux machine, probably something like Ubuntu. Find a tutorial on how to get git and Node.js installed there (on Ubuntu it's something like sudo apt-get update && sudo apt-get install git nodejs; note the apt package is named nodejs, not node). A sketch follows below.
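If the distro's nodejs package turns out to be too old, the NodeSource setup script is a common alternative. A hedged sketch (the version number is just an example; check NodeSource's instructions for the current one):
# install git, then add the NodeSource repo and install a recent Node.js
sudo apt-get update && sudo apt-get install -y git curl
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs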
Once you have git and node, make a working directory and cd into it: mkdir www && cd www (this isn't critical, but it is conventional).
Copy the link that allows you to "Clone your repo using HTTPS" (there's a button near the top right of your GitHub (or similar) repo page for this), then clone the repo on the server. You'll need to enter your credentials.
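For example (the repository URL is a placeholder; substitute your own):
git clone https://github.com/your-username/your-express-app.git
cd your-express-app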
Now all the files that you had on your computer are on this new computer.
Next you'll probably have to run npm i to install the npm packages you depend on. (This assumes you properly used .gitignore so that GitHub isn't filled with extra copies of your npm packages.)
Now you should be able to run your code as usual: npm run start
If all those steps work, you'll want something that keeps the server running "forever", like https://www.npmjs.com/package/forever (npm i -g forever) or, even better, https://www.npmjs.com/package/pm2, which will keep your Express server running continuously.
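A minimal pm2 sketch, assuming your package.json defines a start script (the process name is just a placeholder):
sudo npm i -g pm2                              # install pm2 globally
pm2 start npm --name my-express-app -- start   # run "npm start" under pm2
pm2 save                                       # remember the process list across restarts
pm2 startup                                    # print a command that sets up a boot-time service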
Finally, you'll need to configure the server on AWS/Google/whatever service you're using to route traffic coming in on ports 80 and 443 to port 3000, and to open that traffic to the world. This differs depending on the service you chose, so find a tutorial for doing just that part.
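On the Linux box itself, one common trick for the port-80 half of this is an iptables redirect (a hedged sketch; many providers would rather you do this in their firewall or load-balancer settings, so treat this as illustration only):
# redirect incoming HTTP traffic on port 80 to the Express app on port 3000
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000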
At this point people across the internet can only reach your service at an AWS URL or a Google URL, but it's a good chance to make sure everything works perfectly. Once you're happy with everything, associate your purchased domain with that AWS/Google address. You can do that on the AWS side, or on the GoDaddy/Namecheap/wherever-you-bought-your-domain side.
For the Docker option, you can install the aws-cli tools and upload your built Docker container to AWS to make it available. Find a tutorial for that.
Your question is very broad, so I sometimes brushed over details, but this is essentially what you have to do.
Scenario:
I'm using ssh to connect to a remote machine: I run ssh <hostname> from the command line, which connects me to that machine. I want to edit and run code on the remote machine. So far the only way I know is to create, edit, and run the files in vi inside that command window, because my only connection to that machine is the command window.
My Question is:
I'd love to be able to edit my code in VSCode on my own machine, and then use the command line to save that file to the remote machine. Does anyone know if this is possible? I'm using OS X and ssh'ing into a Linux Fedora machine.
Thanks!
Sounds like you're looking for a command like scp. SCP stands for secure copy protocol, and it builds on top of SSH to copy files from one machine to another. To upload your code to your server, all you'd have to do is:
scp path/to/source.file username@host:path/to/destination.file
EDIT: As @Pam Stums mentioned in a comment below the question, rsync is also a valid solution, and is definitely less tedious if you would like to automatically sync client and server directories.
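A hedged rsync sketch (paths and host are placeholders):
# copy only changed files; -a preserves permissions, -z compresses in transit
rsync -avz --exclude node_modules ./project/ username@host:~/project/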
You could export the directory on the remote machine using NFS or Samba, mount it as a share on your local machine, and then edit the files locally.
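A hedged NFS sketch (the export path and client subnet are placeholders; Samba setup is different):
# on the remote machine: export the project directory and reload the export table
echo "/home/you/project 203.0.113.0/24(rw,sync)" | sudo tee -a /etc/exports
sudo exportfs -ra
# on the local machine: mount the export somewhere convenient
mkdir -p ~/remote-project
sudo mount -t nfs remote-host:/home/you/project ~/remote-project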
If you're happy using vim, check out netrw (it comes with most vim distributions; :help netrw for details), which lets you use MacVim locally to edit the remote files.
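For example, netrw can open a remote file over scp straight from the shell (host and path are placeholders; note the double slash for an absolute path):
vim scp://username@host//home/username/project/app.js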
I installed youtube-dl on my local machine using curl as mentioned in the official README here.
sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
sudo chmod a+rx /usr/local/bin/youtube-dl
Now when I run the below command on my local machine:
youtube-dl --cookies cookie.txt https://www.youtube.com/watch?v=x-5V_RS3Q48 -u my_account@gmail.com -p my_pass_word
I am able to download the video without any hassle.
But when I try to download the same video on one of my EC2 instances, I get an exception.
The installation procedure on both machines is exactly the same, the youtube-dl version is exactly the same (2017.08.18), and the Python version is the same (2.7.6).
The only difference I could figure out is the kernel versions on both machines:
On My Local: Linux-3.19.0-25-generic-x86_64-with-Ubuntu-14.04-trusty
On EC2 Instance: Linux-3.13.0-74-generic-x86_64-with-Ubuntu-14.04-trusty
Also, the video I am trying to download is private and was uploaded by the same user whose credentials I am providing.
One important point to note is that the EC2 machine is able to download the video without any trouble if I don't use the username & password (which only works for videos that are not private).
Thanks
Posting the answer in case someone else is stuck with a similar issue.
The issue was with the cookies not being generated on the server-edition Linux OS of the EC2 instance provided by AWS.
From what I learned recently, these machines don't have a Firefox browser (at least by default), and that's why youtube-dl was failing to create the cookie file.
Solution
I created the cookie file locally, set its expiry time to 20 years in the future, moved that cookie onto the EC2 instance, and used it to sign in rather than creating one there.
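A hedged sketch of that workflow (key file and host are placeholders; the --cookies flag both reads and writes the cookie jar):
# on the local machine: sign in once so youtube-dl writes cookie.txt
youtube-dl --cookies cookie.txt -u my_account@gmail.com -p my_pass_word https://www.youtube.com/watch?v=x-5V_RS3Q48
# copy the jar to the EC2 instance
scp -i my-key.pem cookie.txt ec2-user@ec2-host:~/
# on the instance: reuse the cookie instead of logging in
youtube-dl --cookies ~/cookie.txt https://www.youtube.com/watch?v=x-5V_RS3Q48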
Thanks
I set up an OpenShift application and set up my local PuTTY to connect to the server via SSH. Everything works fine, but I don't know how to run a few commands (mainly alias definitions) automatically after I connect to the server (I don't want to copy and paste the same commands every time I connect).
On my local Linux shell I can use .bashrc, but this doesn't seem to work on OpenShift. I can't write a file in my home directory (/var/lib/openshift/[some letters and numbers]/), and I don't know the right place to put this file. Does anybody know where I have to put a file which will be run every time I log in?
I'd prefer a solution which doesn't involve my local SSH software, as I'm connecting to this OpenShift application from different machines.
You can use your .bash_profile located in your $OPENSHIFT_DATA_DIR.
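For example (the alias itself is just an illustration):
# append an alias to the profile that gets sourced at login
echo 'alias ll="ls -la"' >> $OPENSHIFT_DATA_DIR/.bash_profile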
This is normally done in the .bashrc, .profile, or .bash_profile files. Since you say those don't work, you can put the commands in a script file, scp that file to the remote server, and then run it when you ssh, in a single command.
I have not used OpenShift, but I have used AWS EC2 instances a lot with Ruby scripts:
ssh ubuntu@ec2-address ruby basic-auto.rb
The above command executes the Ruby file after the ssh connection is made. The script can be in any language, or simply a bash file (.sh), and it executes after ssh.
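A hedged sketch of that copy-then-run approach (host and filename are placeholders):
scp setup.sh ubuntu@ec2-address:~/          # copy the script over once
ssh ubuntu@ec2-address 'bash ~/setup.sh'    # run it as part of the ssh command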
I'm trying to edit files on a remote Amazon EC2 Linux instance. I'm currently just sshing in and using nano, but would really like a graphical text editor. I have two problems:
I have to use sudo to edit these server files when I ssh in.
I can only log in with the key Amazon gave me. Ex: ssh -i Andrew.pem ec2-user@55.55.44.33
Please help! I'm not picky, just any graphical text editor since using nano is a huge pain.
For remote editing, there are lots of options. This answer, like any other, is sure to become outdated as more options enter the field.
For vim, the netrw module meets this need, and is shipped with the editor by default.
For emacs, this is available with TRAMP.
For the ATOM editor, see the remote-files plugin.
For IntelliJ, editing files on remote hosts is supported in the commercial edition.
For Eclipse, see the Remote System Explorer from the Target Management project.
I'd suggest starting with the editor you prefer and evaluating options from there. If you set up your SSH session to be able to authenticate directly to root (password auth is best disabled for root, but if you have sudo you can install RSA keys), then you'll be able to specify root as a target user for any of the above.
By contrast, if you really do need sudo, you still have options:
See Using tramp to open files sudoed to root on the Emacs wiki. New versions also support a ssh+sudo transport, meaning this wiki entry may already be out-of-date.
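For instance, TRAMP's multi-hop syntax can chain ssh and sudo in a single path (a hedged sketch; host and file are placeholders). Inside emacs, C-x C-f and then:
/ssh:ec2-user@host|sudo::/etc/nginx/nginx.conf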
To help anyone who just needs a quick command-line text editor:
you can use vi:
vi file-name.txt
or nano:
nano file-name.txt
optionally use sudo if the file requires root permissions to edit, e.g.:
sudo nano file-name.txt
Just modify the appropriate files on your local machine and scp the file to the remote machine.
scp <local_machine_path_to_file> remoteUser@remoteHostName:<filePath>
Amazon has now acquired Cloud9, a browser-based IDE that can edit files on your EC2 instance.
https://aws.amazon.com/cloud9/
Today I found two products that can use sudo:
MobaXterm (free version) and SmarTTY
MobaXterm has a button in the file browser that enables sudo mode. You can view, create and edit files as a sudo user. Use this switch when necessary.
Unfortunately, this only works through the SCP protocol.
SmarTTY works differently: when you try to save a file that requires sudo, SmarTTY throws an error and immediately offers to save the file with sudo.
Of the two products, I recommend MobaXterm.
Sudo grants root privileges for that particular command, and you need root privileges to edit system files, even on a local machine. If you don't like typing sudo every time, you can run sudo -s: you will change to the root user, and the terminal will show it, i.e. root@ip..., and the $ sign in the prompt will change to #. Honestly, I prefer not going root, because it is easier to make irreversible mistakes with root privileges. I've made some mistakes and I'm talking from experience...
As far as the second part of your question goes, you can configure various text editors to sftp into your instance, such as Sublime Text.
You will have to use the .pem key file every time you ssh using terminal. This is because AWS takes security very seriously. You can put the key file in your home directory. That way you don't have to change directories every time you open up terminal.
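For example, reusing the hypothetical values from the question, with the key kept in your home directory:
ssh -i ~/Andrew.pem ec2-user@55.55.44.33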
You can also edit a local copy of the files and then use FileZilla to transfer them; setting up FileZilla to work with your EC2 instance is straightforward. You can also give vim a try, since it highlights your code and is more advanced than nano. Use the command vi or vim from terminal.
Happy SSH'ing ;).
ssh -X user@server
You have to enable the appropriate X11 forwarding settings on both ends.
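A hedged sketch (the editor is just an example and must be installed on the server; on OS X you also need an X server such as XQuartz):
# connect with X11 forwarding, then launch a graphical editor remotely;
# its window renders on your local display
ssh -X user@server
gedit /path/to/file &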
I use SFTP Net Drive, which lets you create a virtual drive on your local computer that maps the remote file system accessible via the SFTP protocol. After the mapping is created, you can use the editor of your preference.
You can use nano, vim, vi, or many others. However, if you want to edit with a graphical text editor, you will have to use SFTP, since Amazon does not support plain FTP. One way is to use FileZilla to upload your files; here is a video on using FileZilla: https://www.youtube.com/watch?v=VawBMj29g0o. I suggest SSH though, it's fast and easy; here is a video on that: https://www.youtube.com/watch?v=O2-3HoRjBH4
I found a weird workaround for a GUI-based text editor on AWS: I used Jupyter Notebook. If you have Anaconda installed on your instance, follow these steps:
ssh onto your instance using ssh -i <location of your private key> <username>@<public DNS>
Start jupyter notebook on your instance using jupyter notebook --no-browser --port=8888
Open a new terminal window and create an SSH tunnel to the notebook using ssh -i <location of your private key> -L 8212:localhost:8888 <username>@<public DNS>
Now you can open jupyter notebook at localhost:8212
Using the Jupyter notebook environment, you can not only launch and run IPython notebooks but also create and edit any file, like a text editor.
would really like a graphical text editor
You cannot have a graphical editor over a plain SSH session; you need to use an editor like nano, as you said, or vim or emacs. Sudo is required when you have to edit configuration files owned by root.
To assist others with this same question, I would suggest jEdit. It is very capable, and it has a very rich plugin environment, language parsing, etc.
http://www.jedit.org
It has "always" supported sftp read and write of files with the sshConsole plugin.
I use it now on my AWS EC2 instance with the key pair supplied by AWS.
Lastly, it is not a good idea to edit files owned by root in the "production" environment.
Do your dev work in the AWS user's home folder so that you have full control of the source files. Then use a symlink to the actual server's file tree so you can serve it to yourself for testing. There are lots of controls in nginx and apache to limit who can view your dev site.
EDIT/UPDATE:
The NppFtp plugin for Notepad++ provides sftp access to AWS. I just tested it with the .pem file that they provided for my login at AWS.
For this, I'd suggest one of the following:
Learn and use emacs; it's quite powerful as far as textmode editors go.
Install your favourite graphical editor on the server and use X forwarding, 'ssh -X server.com'. This will allow you to launch the editor remotely, but have it display locally.
Most elegant in my opinion: use sshfs (https://github.com/libfuse/sshfs) to mount the remote directory locally, so you can work on the files directly using your favourite text editor.
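A hedged sshfs sketch (host and paths are placeholders):
mkdir -p ~/mnt/remote
sshfs user@server.com:/home/user/project ~/mnt/remote   # mount the remote directory locally
# ... edit files under ~/mnt/remote with any local editor ...
fusermount -u ~/mnt/remote                              # unmount when done (on OS X: umount ~/mnt/remote)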