ssh timing out on second login - ruby-on-rails-5

I'm trying to set up Rails on my site via ssh. When everything is set up, I start the server with rails server and I get:
=> Rails 5.0.1 application starting in development on http://localhost:3000
=> Run rails server -h for more startup options
Puma starting in single mode...
* Version 3.6.2 (ruby 2.3.3-p222), codename: Sleepy Sunday Serenity
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
That would be fine if the problem stopped there, but when I point my browser at the server's IP address on port 3000, the browser just hangs instead of displaying the default Rails welcome page.
Since I can't type in more commands, I open a new terminal and log in via:
ssh -i /path/to/cloud.key user_name@XXX.XXX.XXX.XXX
I think I've seen it work before, but now it's timing out:
ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out
I found similar problems resolved on Stack Overflow, but none of the solutions applied to public key authentication, and when I try them (ssh user_name@XXX.XXX.XXX.XXX), I get Permission denied (publickey).
So I want to learn why my browser is hanging (whether I need to install nginx or apache2, configure Puma, etc.), and/or why my attempts to log into a second ssh session are failing.
Any help for this one?
Ubuntu Server 16.04
Rails 5.0.1
Ruby 2.3.0p0

You can't reach it from your browser because the server is bound to localhost, so it only accepts connections from the machine itself. Start it with rails s -b 0.0.0.0 so it listens on all interfaces.
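As a rough sketch, assuming port 3000 is open in the server's firewall/security group and XXX.XXX.XXX.XXX stands for your server's public IP:
# on the server: bind to all interfaces instead of localhost
rails server -b 0.0.0.0 -p 3000
# from your local machine: the Rails welcome page should now answer
curl -I http://XXX.XXX.XXX.XXX:3000
If curl still hangs, the next thing to check is the host firewall or the cloud provider's security group for port 3000.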

Related

Connecting erlang observer to remote machine via public IP

Background
I have a machine in production running an Elixir application (no access to iex, only to erl) and I am tasked with analysing why we are consuming so much CPU. The idea here would be to launch observer, check the Processes tab and see the processes with the most reductions.
How am I connecting?
To connect I am following a tutorial from a blog:
https://sgeos.github.io/elixir/erlang/observer/2016/09/16/elixir_erlang_running_otp_observer_remotely.html
Their instructions are as follows:
launch the app on the production machine with a cookie and a name
from local, run ssh user@public_ip "epmd -names" to get the name of the app and the port used
from local, create an ssh tunnel to the remote machine: ssh -L 4369:user@public_ip:4369 -L 42877:user@public_ip:42877 user@public_ip (4369 is the default epmd port, 42877 is the port of the app)
from local, connect to the remote machine using the node's name: erl -name "user@app_name" -setcookie "mah_cookie" -hidden -run observer
Problem
And now, in theory, I should be able to use observer on the machine. Instead, however, I am greeted with the following error:
Protocol 'inet_tcp': register/listen error: epmd_close
So, after scouring the dark side of the internet, I decided to use sudo journalctl -f to check all the logs of the machine, and I found this:
channel 3: open failed: administratively prohibited: open failed
my_app_name sshd[8917]: error: connect_to flame@99.999.99.999: unknown host (Name or service not known)
/scripts/watchdog.sh")
my_app_name CRON[9985]: pam_unix(cron:session): session closed for user flame
Where:
erlang -name: my_app_name
machine user: flame
machine public ip: 99.999.99.999 (obviously not real)
So it tells me unknown host?? I am confused, since 99.999.99.999 is the public IP of the machine itself!
Questions
What am I doing wrong?
I read that in older versions of Erlang I can’t monitor a machine with observer if the two are on different networks (which is the case here, because I want to monitor this machine from my localhost), but I didn’t find any information on whether this still applies in modern versions.
If this is in fact impossible, what alternatives do I have?
Solution
After 3 days of non-stop searching, I finally found something that works.
To summarize, I am putting here everything I did. Note that, unlike my first attempt, the tunnel below forwards to localhost on the remote side rather than to user@public_ip, which is exactly the "unknown host" sshd complained about in the log above.
All steps in local machine:
get the ports from the remote server:
> ssh remote-user@remote-ip "epmd -names"
epmd: up and running on port 4369 with data:
name super_duper_app at port 43175
create an ssh tunnel with those ports:
ssh remote-user@remote-ip -L4369:localhost:4369 -L43175:localhost:43175
On another terminal on your local machine, run an iex session with the cookie the app on your remote server is using. Then connect to it and start observer:
iex --name observer#127.0.0.1 --cookie super_duper_cookie
Node.connect :"super_duper_app@127.0.0.1"
> true
:observer.start
With observer started, select the machine from the Nodes menu.
Possible setbacks
If you have tried this and it didn't work, there are a few things you can check for:
Check whether the EPMD port (4369) on your local machine is free; if not, kill the process using it to free it (see the sketch after this list).
Check your ssh tunneling keys and configurations for permissions. As @Roberto Aloi pointed out, this link can be useful: https://unix.stackexchange.com/questions/14160/ssh-tunneling-error-channel-1-open-failed-administratively-prohibited-open
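As mentioned in the first setback, a rough sketch for checking and freeing the local EPMD port (4369 is the default EPMD port; node port 43175 comes from the example above):
lsof -i :4369    # see which process is holding the local EPMD port
epmd -kill       # stop the local epmd if it is in the way
# with the ssh tunnel up, querying the forwarded port should list the remote node:
epmd -names
Seeing name super_duper_app at port 43175 locally confirms the tunnel is carrying the EPMD traffic.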

AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000; Not able to connect to local node from aql

I have installed Aerospike on my Mac by following these installation steps.
All the validations are working fine. I am able to connect to the cluster using the Chrome browser (screenshot omitted).
I have also installed the AQL tools following the instructions here.
But I'm unable to connect to local node from aql.
$ aql
2017-11-21 16:06:09 WARN Failed to connect to seed 127.0.0.1 3000.
AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000
Error -1: Failed to connect
$ asadm
Aerospike Interactive Shell, version 0.1.11
ERROR: Not able to connect any cluster.
Also, I have noticed the Java client is giving an error:
AerospikeClient client = new AerospikeClient("localhost", 3000);
When I changed localhost to the actual IP returned by vagrant ssh -c "ip addr" | grep 'global eth1', it works fine.
How do I connect with aql using custom parameters? I want to pass the IP address and port as parameters to aql. Any suggestions?
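Since the Mac installation appears to run the server inside a Vagrant VM (hence the vagrant ssh above), 127.0.0.1:3000 on the host only reaches it if that port is forwarded. A quick connectivity check, where 172.28.128.3 is just a placeholder for the eth1 address reported above:
nc -vz 127.0.0.1 3000       # fails unless the port is forwarded to the VM
nc -vz 172.28.128.3 3000    # should succeed against the VM's own address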
$ aql --help
https://www.aerospike.com/docs/tools/aql/index.html - discusses all the various command line options.
$ aql -h a.b.c.d -p 1234
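For the Vagrant setup described in the question, that would look something like the following, where the address is a placeholder for whatever ip addr reported for eth1:
$ aql -h 172.28.128.3 -p 3000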
There is another possibility: you are running Aerospike on your own port instead of the default 3000, so when you try to connect you can run a command like aql -p 4000.
Hope this helps.
Seems like the port is not getting freed even after exiting the vagrant console.
Tried closing all the terminal windows and then starting again. But no luck.
Finally, restarting the system resolved the issue.

Ambari cluster : Host registration failed

I am setting up an Ambari cluster with 3 VirtualBox VMs running Ubuntu 16.04 LTS.
I followed this Hortonworks tutorial.
However, when I go to create a cluster using the Ambari Cluster Install Wizard, I get the error below during step 3 - "Confirm Hosts".
26 Jun 2017 16:41:11,553 WARN [Thread-34] BSRunner:292 - Bootstrap process timed out. It will be destroyed.
26 Jun 2017 16:41:11,554 INFO [Thread-34] BSRunner:309 - Script log Mesg
INFO:root:BootStrapping hosts ['thanuja.ambari-agent1.com', 'thanuja.ambari-agent2.com'] using /usr/lib/python2.6/site-packages/ambari_server cluster primary OS: ubuntu16 with user 'thanuja'with ssh Port '22' sshKey File /var/run/ambari-server/bootstrap/5/sshKey password File null using tmp dir /var/run/ambari-server/bootstrap/5 ambari: thanuja.ambari-server.com; server_port: 8080; ambari version: 2.5.0.3; user_run_as: root
INFO:root:Executing parallel bootstrap
Bootstrap process timed out. It was destroyed.
I have read a number of posts saying that this is related to not enabling password-less SSH to the hosts, but I can ssh to the hosts from the server without a password.
I am running Ambari as a non-root user with root privileges.
This post helped me.
I modified the users on the host machines so that they can execute sudo commands without a password, using the visudo command.
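For reference, this amounts to a sudoers entry like the one below, added with visudo on each host (thanuja and the agent hostname are the ones from the log above; substitute your own), plus a quick check from the Ambari server:
# added via visudo on each agent host
thanuja ALL=(ALL) NOPASSWD:ALL
# from the Ambari server: both ssh and sudo should now work without a password
ssh thanuja@thanuja.ambari-agent1.com 'sudo -n whoami'
The check should print root; sudo -n makes sudo fail instead of prompting if a password would still be required.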
Please post if you have any alternative answers.

Syncing folder with Windows guest in Vagrant and using with IIS

I'm running a Windows 2012 R2 eval box (mwrock/Windows2012R2) on a macOS Sierra host.
I'm trying to set up IIS to run a web site from the synced folder, but I keep getting an HTTP Error 500.19 - Internal Server Error.
After researching, I found out that it seems to be related to permissions. I tried every possible combination of permissions, granting to Users, IIS_IUSRS, vagrant, etc., and changing the app pool user, but nothing got it working.
Then I figured I could just use rsync. So I tried to change my Vagrantfile to use type: rsync, and then got an error saying rsync was not found in the PATH.
No problem, I installed rsync using Chocolatey and tried again. This time, I got an SSH error: Error: ssh_exchange_identification: Connection closed by remote host. I figured there probably wasn't anything set up for SSH on the Windows guest, so I found this post on setting up OpenSSH.
I followed the instructions and tried again. rsync still wouldn't work, but now there's no error; it just stalls. I tested doing a regular SSH to the Windows guest and that works fine. However, while plain old ssh works, vagrant ssh does not.
I get the following error after entering the password: ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 2222: incomplete message. Meanwhile, doing ssh vagrant@127.0.0.1 -p 2222 works fine. So I'm running out of ideas at this point.
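One thing that might narrow this down, since plain ssh works but vagrant ssh does not: dump the exact settings Vagrant uses and replay them with plain ssh (the machine name default is Vagrant's usual name for a single-machine setup; yours may differ):
vagrant ssh-config > vagrant_ssh.cfg
ssh -F vagrant_ssh.cfg default
If that reproduces the incomplete message error, diffing vagrant_ssh.cfg against the options you pass by hand (port, key, ciphers) should point at the culprit.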
Anybody manage to get this working with a setup similar to mine?

Docker login error with Get Started tutorial

I'm trying to follow the beginner tutorial on Docker's website and I'm stuck on an error at login.
The OS is Ubuntu 14.04, I'm not using VirtualBox, I'm not behind any proxy, and I want to push to the "regular" Docker repository (not a private one).
All the threads I've found mention proxies and private repositories, but that isn't my case; I'm just trying to do the simple beginner tutorial.
Here is my attempt:
$ sudo docker login
[sudo] password for myuname:
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: myDHuname
Password:
Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
My docker info:
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 5
Server Version: 1.11.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 28
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 3.19.0-58-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.686 GiB
Name: myuname-ThinkPad-T420
ID: 6RW3:X3FC:T62N:CWKI:JQW5:YIPY:RAHO:ZHF4:DFZ6:ZL7X:JPOD:V7EC
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Epilogue
Now docker login is working. I have not touched anything since yesterday when it was broken...
I can't reproduce the behavior anymore.
I encountered this issue when I first used Docker. I had a Shadowsocks proxy on, configured in PAC mode. When I tried to run docker run hello-world, I got this timeout error. When I set the proxy mode to global, the error was also there.
But when I disabled the proxy, Docker ran well; it pulled the remote image successfully.
Docker for Windows
Note: Some users reported problems connecting to Docker Hub on the Docker for Windows stable version. This would manifest as an error when trying to run docker commands that pull images from Docker Hub that are not already downloaded, such as a first time run of docker run hello-world. If you encounter this, reset the DNS server to use the Google DNS fixed address: 8.8.8.8. For more information, see Networking issues in Troubleshooting.
The error Client.Timeout exceeded while awaiting headers indicates one of:
The GET request to the registry https://registry-1.docker.io/v2/ timed out
The HTTP client (Go's net/http, per the error message) timed out before a response was received
The connection never formed (a proxy/firewall gobbled it up)
If you see the result below, you can rule out a timeout and network connectivity issues:
$ curl https://registry-1.docker.io/v2/
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
If you get the above response, the next step would be to check whether your user environment has some proxy configuration:
env | grep "proxy"
Note: the Docker daemon runs as root, so it may see an http_proxy in its environment that your user shell does not. Most likely I am wrong; anyhow, see what happens with the curl GET request.
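A rough way to check this on Ubuntu 14.04, where the daemon typically reads extra environment from /etc/default/docker (the path and the restart command are assumptions for a stock 14.04 package):
env | grep -i proxy                # your user's environment
sudo -i env | grep -i proxy        # root's environment, which the daemon may inherit
grep -i proxy /etc/default/docker  # daemon-specific proxy settings, if any
sudo service docker restart        # apply any changes before retrying docker login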
Change the proxy settings in Firefox. Maybe you are in an access-restricted mode. Just add the server address in Firefox under Settings -> Preferences -> Advanced -> Network -> Connection (Settings), and add the server IP to the "No proxy for" field so the issue can be resolved.