I have created a virtual machine using Red Hat OpenShift Virtualization, but I am unable to connect to it over SSH.
The virtual machine pod is running and I can access the pod terminal from the web console. I followed the instructions on this page to create the SSH service.
[root@api.ocpcluster-m1.cp.mydomain.com ~]# oc get pods
NAME                                         READY   STATUS    RESTARTS   AGE
virt-launcher-fedora-clumsy-lamprey1-7rdgm   1/1     Running   0          22d
[root@api.ocpcluster-m1.cp.mydomain.com ~]# oc get svc
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
fedora-vm-ssh   NodePort   172.30.84.81   <none>        22:31016/TCP   22d
[root@api.ocpcluster-m1.cp.mydomain.com ~]# ssh root@10.17.55.207 -p 31016
ssh: connect to host 10.17.55.207 port 31016: No route to host
[root@api.ocpcluster-m1.cp.mydomain.com ~]#
10.17.55.207 is the IP of one of the nodes.
What am I doing wrong here?
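For what it's worth, "No route to host" against a NodePort is usually a firewall reject on the node rather than an actual routing problem, so a raw TCP probe against the node port is a useful first check. A small sketch (the service line is copied from the output above; the `nc` probe is commented out because it needs network access to the node):

```shell
# Pull the allocated node port out of the PORT(S) column; 22:31016/TCP means
# service port 22 is exposed on node port 31016.
svc_line='fedora-vm-ssh   NodePort   172.30.84.81   <none>   22:31016/TCP   22d'
node_port=$(echo "$svc_line" | sed -E 's|.*:([0-9]+)/TCP.*|\1|')
echo "$node_port"
# nc -zv 10.17.55.207 "$node_port"   # a firewall reject shows up here too
```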
I'm still getting the hang of Vagrant/Redis/Linux. The issue is that I cannot connect to the Redis server running on the VM.
Host: Macbook
Vagrantfile:
config.vm.box = "laravel/homestead"
config.vm.hostname="redis-test"
config.vm.network "forwarded_port", guest: 6379, host: 6379, id: "redis"
Guest: laravel/homestead Vagrant box.
/etc/redis/redis.conf
bind 0.0.0.0
After changing redis.conf, I also restarted the service:
sudo /etc/init.d/redis-server restart
(and also) sudo service redis-server restart
Also made sure ufw is disabled
sudo ufw disable
sudo ufw status
Status: inactive
If I run redis-cli -h redis-test ping, I get PONG and can access Redis as usual (on the guest VM).
Now back on the host machine (the MacBook), I cannot access redis-server:
redis-cli -h redis-test ping
Could not connect to Redis at redis-test:6379: nodename nor servname
provided, or not known
Can someone help me connect to redis-server on vagrant box, please? Any help is greatly appreciated!
You forwarded Redis port 6379 from the host machine to the redis-test VM, but the host machine knows nothing about the redis-test hostname you are trying to connect to.
You can connect to Redis on the redis-test VM from the host machine in two ways:
1. Connect to localhost, because the Redis port is already forwarded to the redis-test VM:
redis-cli -h localhost ping
2. Add redis-test to /etc/hosts:
echo '127.0.0.1 redis-test' >> /etc/hosts
and then connect the way you did before:
redis-cli -h redis-test ping
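One quick sanity check before either option (a sketch, not from the original answer): confirm the forwarded port is actually listening on the Mac. If it is closed, the problem is the Vagrant forwarding itself, not name resolution:

```shell
# With the forwarded_port line above, something should answer on localhost:6379.
nc -z localhost 6379 && echo "port open" || echo "port closed"
```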
I am new to Trafodion and I am trying to install Trafodion on CDH 5.7 with the Python installer, following the Apache Trafodion site.
[root@node1 python-installer]# ./db_install.py
**********************************
Trafodion Installation ToolKit
**********************************
Enter HDP/CDH web manager URL:port, (full URL, if no http/https prefix, default prefix is http://): http://10.1.1.10:7180
Enter HDP/CDH web manager user name [admin]:
Enter HDP/CDH web manager user password:
Confirm Enter HDP/CDH web manager user password:
***[ERROR]: Host [node1]: Failed to connect using ssh. Be sure:
1. Remote host's name and IP is configured correctly in /etc/hosts.
2. Remote host's sshd service is running.
3. Passwordless SSH is set if not using 'enable-pwd' option.
4. 'sshpass' tool is installed and ssh password is correct if using 'enable-pwd' option.
I also checked the following:
1. /etc/hosts and the hostname are correct:
[root@node1 python-installer]# hostname
node1.trafodion.local
[root@node1 python-installer]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.1.10 node1.trafodion.local node1
10.1.1.11 node2.trafodion.local node2
10.1.1.12 node3.trafodion.local node3
10.1.1.13 node4.trafodion.local node4
2. The sshd service is running:
[root@node1 python-installer]# service sshd status
openssh-daemon (pid 3480) is running...
3. Every pair of nodes can ssh to each other without a password.
4. sshpass is not installed:
[root@node1 python-installer]# sshpass
-bash: sshpass: command not found
Thank you for any help.
Daniel
Please try to ssh to node1 .. node4 manually as the root user; it will be easier to detect any SSH login issues that way. If manual ssh works, the installer should work too.
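The loop below is one way to run that manual check across all four nodes from the question's /etc/hosts (a sketch; BatchMode makes a broken passwordless setup fail immediately instead of prompting for a password):

```shell
for host in node1 node2 node3 node4; do
  # `true` just tests login; BatchMode forbids password prompts, so any
  # failure here means passwordless SSH is not set up for that host.
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$host" true 2>/dev/null; then
    echo "$host: ok"
  else
    echo "$host: ssh failed"
  fi
done
```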
I am not able to access my container, which is running a “dockerized” IPython notebook application. The host is a CentOS 7 machine running in Google Cloud.
Here is the details of the environment:
Host: CentOS 7 with an Apache web server, running for example on IP address 123.4.567.890 (port 80 is listening).
Docker container: a Jupyter Notebook application – the container is called, for example, APP-PN and can be accessed via port 8888 in Docker.
If I run the application on my local server, I can access the notebook application via the browser:
http://localhost:8888/files/dir1/app.html
However, when I run the application on the Google Cloud if I put:
http://123.4.567.890:8888/files/dir1/app.html
I cannot access it.
I tried all combinations of opening port 8888 over TCP on the host as well as exposing the port via the docker run command – none of which worked:
firewall-cmd --zone=public --add-port=8888/tcp --permanent
docker run -it -p 80:8888 APP-PN
docker run --expose 8888 -it -p 80:8888 APP-PN
I also tried to change Apache to listen on ports 80 and 8888, but I got some errors.
However, if I STOP the Apache web server and then run
docker run -it -p 80:8888 APP-PN
I can access the application in my browser via:
http://123.4.567.890/files/dir1/app.html
Here is my question: I do not want to STOP my Apache web server, and at the same time I want to access my Docker container via the external port 8888.
Thanks in advance for all the help.
I didn't see in your examples a
docker run -it -p 8888:8888 APP-PN
The -p argument takes the host port to listen on first, then the container port to route to. If you want the host to listen on the same port as the container, -p 8888:8888 will get it done.
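A sketch of what that looks like for this setup (APP-PN and the firewall zone are taken from the question; the docker and firewall-cmd lines are commented out since they have to run on the CentOS host):

```shell
# -p is HOST_PORT:CONTAINER_PORT, so 8888:8888 leaves port 80 free for Apache.
host_port=8888
container_port=8888
echo "-p ${host_port}:${container_port}"
# docker run -d -p "${host_port}:${container_port}" APP-PN
# firewall-cmd --zone=public --add-port=${host_port}/tcp --permanent
# firewall-cmd --reload
```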
I'm a newbie at Docker. I'm creating a Hello, World example. All I'm trying to do is bring up Apache in a container and then view the default website from the host machine.
Dockerfile
FROM centos:latest
RUN yum install epel-release -y
RUN yum install wget -y
RUN yum install httpd -y
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
And then I build it:
> docker build .
And then I tag it:
docker tag 17283f566320 my:apache
And then I run it:
> docker run -p 80:9191 my:apache
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
It then runs....
In another terminal window, I attempt to issue the curl command to view the default web site.
> curl -XGET http://0.0.0.0:9191
curl: (7) Failed to connect to 0.0.0.0 port 9191: Connection refused
> curl -XGET http://localhost:9191
curl: (7) Failed to connect to localhost port 9191: Connection refused
> curl -XGET http://127.0.0.1:9191
curl: (7) Failed to connect to 127.0.0.1 port 9191: Connection refused
Just to make sure that I got the port correct, I run this:
> docker ps -l
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS                          NAMES
5aed4063b1f6   my:apache   "/usr/sbin/httpd -D F"   43 seconds ago   Up 42 seconds   80/tcp, 0.0.0.0:80->9191/tcp   angry_hodgkin
Thanks to all. My ports were reversed:
> docker run -p 9191:80 my:apache
Although you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check the IP of your Docker machine (the virtual machine):
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                    SWARM
default   *        virtualbox   Running   tcp://192.168.99.100
Then run a curl command to view the default website served by your Apache web server inside the container:
curl http://192.168.99.100:9191
If you are running Docker natively on an Ubuntu machine, you should be able to access your container via localhost.
If you are using Mac or Windows, your Docker container runs not on localhost but on its own IP. You can get the container IP with docker inspect <container id> | grep IPAddress, or, if you are using docker-machine, with docker-machine ip <docker_machine_name>.
Related info:
http://networkstatic.net/10-examples-of-how-to-get-docker-container-ip-address/
https://docs.docker.com/machine/reference/ip/
How to get a Docker container's IP address from the host?
So your curl call should be something like curl <container_ip>:<container_exposed_port>
You can also tag your image in the build command with the -t parameter, like this:
docker build -t my:image .
Another tip: you can optimize your Dockerfile by combining the yum install commands, like this:
RUN yum install -y \
epel-release \
wget \
httpd
http://blog.tutum.co/2014/10/22/how-to-optimize-your-dockerfile/
I have installed OpenShift 3 with Docker and Kubernetes using the Ansible installer.
After the installation I want to create the Docker registry on my master, but I get the following error (I read it has something to do with SSL, but I can't find a solution):
commands (from the sample):
[root@ip-10-0-0-x centos]# export CURL_CA_BUNDLE=`pwd`/openshift.local.config/master/ca.crt
[root@ip-10-0-0-x centos]# sudo chmod a+rwX openshift.local.config/master/admin.kubeconfig
[root@ip-10-0-0-x centos]# sudo chmod +r openshift.local.config/master/openshift-registry.kubeconfig
[root@ip-10-0-0-x centos]# oadm registry --create --credentials=openshift.local.config/master/openshift-registry.kubeconfig --config=openshift.local.config/master/admin.kubeconfig
error:
error: error getting client: couldn't read version from server: Get https://10.0.0.x:8443/api: x509: cannot validate certificate for 10.0.0.x because it doesn't contain any IP SANs
additional info
[root@ip-10-0-0-x centos]# kubectl version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v1.1.0-alpha.0-1605-g44c91b1", GitCommit:"44c91b1", GitTreeState:"not a git tree"}
Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.1.0-alpha.0-1605-g44c91b1", GitCommit:"44c91b1", GitTreeState:"not a git tree"}
[root@ip-10-0-0-191 centos]# oc get services
NAME         CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR   AGE
kubernetes   172.30.0.1   <none>        443/TCP   <none>     1d
[root@ip-10-0-0-x centos]# kubernetes apiserver
F0924 12:15:13.674745 75545 server.go:223] No --service-cluster-ip-range specified
The Ansible installer should generate certificates for you that have the right IPs in them. Your local kubeconfig file (which oadm is using to connect to the server) should also have been generated by the Ansible installer – can you verify that this is the case? The file is at ~/.kube/config – does it point to the system the Ansible installer used? Are you using an IaaS for OpenShift, deploying to local machines, or Vagrant?
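The "doesn't contain any IP SANs" part of the error means the server certificate carries no Subject Alternative Name entry for 10.0.0.x. A sketch for checking which SANs a certificate actually contains (the path is the master CA from the question; the same command works on the serving certificate):

```shell
# Print the Subject Alternative Name section; 10.0.0.x should appear as
# an "IP Address:" entry for the x509 validation to pass.
openssl x509 -in openshift.local.config/master/ca.crt -noout -text \
  | grep -A1 'Subject Alternative Name'
```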