Kill localhost:3000 process from Windows command line - ruby-on-rails-3

So I'm using Ruby on Rails on Windows (I hear you all spitting your coffee onto the screen); it's only a short-term thing (I use Ubuntu at home). I tried to fire up WEBrick this afternoon and I got the error message
TCPServer Error, only one usage of each socket address is normally permitted
So it seems as if something is still listening on port 3000 from last week? My question is: how do I kill that process from the Windows command line? Normally I have to press Ctrl and Pause/Break, since Ctrl+C is not working, but that only seems to kill the batch process.
Any solutions welcomed
Edit
So it seems as if
tasklist
will give me the list of processes, but where do I find the process running the WEBrick server?
ruby.exe is not listed as a running process

Try using netstat -a -o -n to determine the PID of the process listening on port 3000. Then you should be able to use taskkill /pid #### to kill whatever process is running on that port.
Probably not the most graceful way to do it, but I think it should work.
EDIT
You'll probably have to also use the /F flag to force-kill the process. I just tried it on my local machine, and that worked fine.
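For example, filtering the netstat output for the port and then killing the PID it reports might look like this (the PID 4772 is made up; yours will differ):
netstat -a -o -n | findstr :3000
  TCP    0.0.0.0:3000    0.0.0.0:0    LISTENING    4772
taskkill /F /PID 4772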

Go into rails_project\tmp\pids and delete the .pid file in there.
Then run:
rails server
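A minimal sketch of that from the Windows command line, assuming the default server.pid file name (run from the project root):
del tmp\pids\server.pid
rails server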

Related

Ubuntu Server - Not able to run server in background

I am running a server for RimWorld on a Raspberry Pi 4B.
I have a problem with running it in the background. When I start the server:
./Open\ World\ Server
Everything starts, but the terminal window takes the name of the server, and when I close that window the server stops.
I've tried many things, like putting & after the command, nohup and others. I've also tried pm2, since it already runs my Discord bot, but everything I've tried still shows Open World Server as "Stopped".
So what I need is to:
run this server in the background;
start this server automatically after a restart.
Thanks everyone for the help :)
So I have found the solution.
I'd tried pm2 many times, but never this way.
Create a script (e.g. with nano) containing the command you want executed; in my case:
#!/bin/bash
./Open\ World\ Server
Save it as run.sh or whatever.
After that, simply do this:
pm2 start run.sh
And that is all you need. You can do the same with a Minecraft server or anything else you want to run in the background and start on reboot of your Pi or other devices.
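For the start-on-reboot part specifically, pm2 itself has to be registered as a boot service and its process list saved; a short sketch (pm2 startup prints a system-specific command that you then run):
pm2 startup        # register pm2 to start on boot (run the command it prints)
pm2 save           # save the current process list so run.sh is resurrected after a reboot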

Rundeck - reboot server job

I have a Rundeck job that reboots a server; it sends the command "sudo reboot". This works and the server reboots.
The problem is that Rundeck doesn't get a signal back, so the job fails.
Is there a way to make this work and get a success signal back in Rundeck?
Perhaps wrap your command in a script, background the reboot operation, and return 0? I'm doing something similar with a set of development VMs, but I'm using virsh. I don't see why this couldn't be done with a physical server:
#!/bin/bash
ssh rundeck@yourserver sudo reboot &
exit 0
You may need to experiment a bit with the ssh options (perhaps '-f' and/or '-n') to get this to work properly.
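A variant using those flags might look like this (a sketch, not tested against Rundeck: -f sends ssh to the background after authentication, -n keeps it from reading stdin):
ssh -f -n rundeck@yourserver sudo reboot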
Well, playing around just now, I used this as a Local Command step:
ssh ${node.username}@${node.hostname} "reboot & exit"
The return code is ZERO and everybody is happy.

AWS process launched from SSH terminate when SSH hangs up

I use SSH to connect to my AWS EC2 instances and run code that takes a long time to complete. I find that if my local computer sleeps (or even if I leave it unattended for a bit), the SSH connection hangs up (which is not fatal in itself), but this seems to terminate the code on the EC2 instance that I launched over SSH.
Also, I use SSH to monitor my remote code locally, so even if there's a way to tell the remote process to stay alive after SSH has gone, I still need a way to see the output of the process locally as it continues to run (without SSH).
How do I keep code running on my AWS EC2 instance after SSH has hung up; how can I monitor the output of such a process?
When you close your tty (the SSH connection closing, in your case), your process gets a SIGHUP, and the default action on SIGHUP is to terminate. To avoid that, you can launch the command with nohup so the SIGHUP is ignored, or catch the SIGHUP in your code and ignore it.
There are a bunch of ways to track a background process, but perhaps the easiest is to have it write to a file and read that file from another SSH session. If your process is a command on the command line, you can redirect its standard output and standard error to a file. When such a file keeps getting new content, it is annoying to keep re-reading it, which is where the command "tail -f" comes in handy.
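Putting that together, a minimal sketch (long_job.py and job.log are placeholder names):
nohup python long_job.py > job.log 2>&1 &   # survives the SSH hangup; stdout and stderr go to job.log
tail -f job.log                             # from any later SSH session, follow the output live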
Here is how you can configure your SSH connection to stay alive:
vi ~/.ssh/config # on your client side
add this line to have the client send a "null packet" every 120 seconds:
ServerAliveInterval 120
If you control the server side, make a similar change:
vi /etc/ssh/sshd_config
add these lines at the bottom of the config file:
ClientAliveInterval 120
ClientAliveCountMax 720
This is for Linux; YMMV with the settings on other OSes.
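On the client side, the resulting ~/.ssh/config entry might look like this (the Host * pattern is an assumption; you can scope it to specific hosts instead):
Host *
    ServerAliveInterval 120
    ServerAliveCountMax 30   # optional: give up after 30 missed keepalives (~60 minutes)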
Use screen
local> ssh ...
remote> screen
remote+screen> python long_running.py ...
You can then detach from screen and even disconnect from SSH, and when you return by SSHing back in again, you can
remote> screen -r
to reconnect to your running code.
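The detach keystroke inside screen is Ctrl-a d. If you keep more than one session around, naming them helps (the session name "job" here is just an example):
remote> screen -S job    # start a named session
(run your code, then press Ctrl-a d to detach)
remote> screen -r job    # reattach to it by name later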

What happens to a process in an EC2 instance when I get a 'Broken Pipe' error on ssh?

I am using some EC2 instances to run some large jobs I cannot run locally. The issue I am seeing is that after a while (X hours after the process started), my shell connection gives me a broken pipe error:
ubuntu@ip-10-122-xxx-xxx:~/stratto/ode$ Write failed: Broken pipe
The instance is still there, because I can reconnect with no problems, but how can I reconnect and get back to seeing the logs of the process as before the 'Broken Pipe'?
Any tip much appreciated,
Thanks!
Redirect your output to a file and run the program with "nohup ..." to ensure the disconnect doesn't kill it. Use "tail -f" to monitor the redirected file.
Note: I originally said to use "tee", but that won't work. A straight redirect and then tail on the file does.
You can use screen to run processes in the cloud even when you are not connected to the server.
sudo apt install screen
To specifically address the issue described in the original post (e.g. connecting to AWS EC2 instances), I put together a basic example and a more advanced example of using screen.
You can use "screen". Detach from it and ping to google.com. So there ssh session will be active through out the installation.

Tornado stopped running on AWS immediately after I terminate my remote session

I'm using SSH to remotely launch Tornado on Amazon Web Services. It works fine when I launch it with:
python startTornado.py
However, after my SSH session times out or is terminated, the Tornado server also stops immediately, so I can't access the webpage anymore. I did quite some searching but couldn't find an answer on Google.
How can I keep Tornado and the site running after my SSH session terminated?
The process will shut down when you log out if it's running in the foreground, or if it tries to write to stdout and the terminal it's outputting to no longer exists. Try starting the server with
nohup python startTornado.py &
The nohup command makes the process ignore the hangup signal and redirects its output to a file (nohup.out by default), and the & at the end runs the command in the background. Alternatively, you can use the screen utility, which allows you to detach a terminal and reattach it in a different SSH session (see the screen man page for details).
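If you'd rather choose the log file yourself than rely on nohup.out, an equivalent sketch (tornado.log is a placeholder name):
nohup python startTornado.py > tornado.log 2>&1 &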
While all the above solutions solve the immediate problem, what you might really need in order to run such processes in production and control them (start/restart/stop) is supervisor. It is Python-based, and it's most useful when you have to run multiple instances of Tornado behind nginx.
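A minimal supervisor program entry might look like this (all paths and names are placeholders; on a typical install it would live under /etc/supervisor/conf.d/):
[program:tornado]
command=python /path/to/startTornado.py
directory=/path/to
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/tornado.log
After supervisorctl reread and supervisorctl update pick it up, you manage it with supervisorctl start/stop/restart tornado.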
In addition to nohup, as Kevin mentioned, you can also use the disown builtin if you are using bash:
disown <job-id>
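For example, for a job already started in the background (the %1 job spec assumes it was this shell's first background job):
python startTornado.py &   # start in the background; bash reports it as job [1]
disown %1                  # drop it from the job table so bash won't send it SIGHUP on exit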