Cannot access RabbitMQ management configured in Docker even though its status is running

I am new to the Docker world and I am attempting to access the RabbitMQ Management plugin on my Windows 10 machine. I am following this. But when I try "http://container-ip:15672" I cannot access the management UI.
Does anyone have experience with this problem?

If you started Docker with
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management
as described in the reference you mentioned, you may need to add -p 15672:15672 to that command line to make the management port accessible from the host.
I just ran into the same problem as a Docker newbie on Windows 10 and found that solution here.
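A sketch of the complete command with the port published (the image and names come from the answer above; also mapping 5672 for AMQP clients is an assumption, not part of the original answer):

```shell
# Sketch: the same run command with -p host_port:container_port added.
# 15672 serves the management UI; mapping 5672 (AMQP) too is an assumption.
cmd='docker run -d --hostname my-rabbit --name some-rabbit \
  -p 15672:15672 -p 5672:5672 rabbitmq:3-management'
# Printed here rather than executed, since running it needs a Docker daemon:
echo "$cmd"
```

Once the container is up with the port published, the UI should be reachable from the host at http://localhost:15672.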

Related

How can I ssh into a container running inside an OpenShift/Kubernetes cluster?

I want to be able to ssh into a container within an OpenShift pod.
I know that I can simply do so using oc rsh, but this assumes that I have the OpenShift CLI installed on the node from which I want to ssh into the container.
What I actually want to achieve is to ssh into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as the OpenShift cluster. The node does have access to web applications hosted in a container (just for the sake of example), but instead of web access, I would like to have ssh access.
Is there any way that this can be achieved?
Unlike a server, which runs an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: cgroups, namespaces, and SELinux. A "fancy" process, if you will.
Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process and assigning it to the same cgroups and namespaces on the same host as the container process and then presenting that session to you, which is not something ssh is designed for.
Using the oc exec, kubectl exec, podman exec, or docker exec CLI commands to open a shell session inside a running container is the method that should be used to connect to running containers.
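For reference, the corresponding commands look roughly like this (the pod and container names are placeholders, not from the question):

```shell
# Sketch: one shell-session command per runtime; "my-pod" and
# "my-container" are hypothetical names. Printed rather than executed here.
openshift='oc rsh my-pod'
kubernetes='kubectl exec -it my-pod -- /bin/sh'
plain_docker='docker exec -it my-container /bin/sh'
printf '%s\n' "$openshift" "$kubernetes" "$plain_docker"
```

Each of these starts a new shell process inside the container's existing namespaces, which is exactly the mechanism described above.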

How do I connect to a docker container running Apache Drill remotely

On Machine A, I run
$ docker run -i --name drill-1.14.0 -p 8047:8047
--detach -t drill/apache-drill:1.14.0 /bin/bash
<displays container ID>
$ docker exec -it drill-1.14.0 bash
<connects to container>
$ /opt/drill/bin/drill-localhost
My question is: how do I, from Machine B, run
docker exec -it drill-1.14.0 bash
on Machine A? I've looked through the help pages, but nothing is clicking.
Both machines are Windows (10 x64) machines.
You need to ssh or otherwise securely connect from machine B to machine A, and then run the relevant Docker command there. There isn't a safe shortcut around this.
Remember that being able to run any Docker command at all implies root-level access over the system (you can docker run -u root -v /:/host ... and see or change any host-system files you want). Usually there's some control over who exactly can run Docker commands because of this. It's possible to open up a networked Docker socket, but extremely dangerous: now anyone who can reach that socket over the network can, say, change the host's password and sudoers files to allow a passwordless root-equivalent ssh login. (Google News brought me an article a week or two ago about attackers looking for open Docker network sockets and using them to turn machines into cryptocurrency miners, for instance.)
If you're building a service, and you expect users to interact with it remotely, then you probably need to make whatever interfaces available as network requests and not by running local shell commands. For instance, it's common for HTTP-based services to have a /admin set of URL paths that require a separate password authentication or otherwise different privileges.
If you're trying to administer a service via its local config files, often the best path is to store the config files on the host system, use docker run -v to inject them into the container, and when you need to change them, docker stop; docker rm; docker run the container to get a new copy of it with a new config file.
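That workflow might look like the following sketch, where the image name and the ./conf path are hypothetical:

```shell
# Sketch of the host-managed config workflow: keep config files on the
# host, mount them with -v, and recreate the container after editing them.
# Printed rather than executed, since running it needs a Docker daemon.
run='docker run -d --name myapp -v "$PWD/conf:/etc/myapp" myimage:latest'
refresh="docker stop myapp && docker rm myapp && $run"
printf '%s\n%s\n' "$run" "$refresh"
```

The point is that no shell access into the container is ever needed: all administration happens through files on the host and the Docker CLI.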
If you're packaging some application, but the primary way to interact with it is via CLI tools and local files, consider whether you actually want to use a tool that isolates the application's filesystem from the host's and requires root-level access to interact with it at all. The tooling for installing semi-isolated tools in your choice of scripting language is pretty mature, and for compiled languages quite well-established; there's nothing wrong with installing software on your host system.

connect opscenter and datastax agent runs in two docker containers

There are two containers running on two physical machines: one container for OpsCenter, and the other for (DataStax Cassandra + OpsCenter agent). I have manually installed the OpsCenter agent on each Cassandra container. This setup is working fine.
But OpsCenter cannot upgrade nodes because the ssh connections to the nodes fail. Is there any way to create an ssh connection between those two containers?
You should NOT run SSH in Docker; read HERE why. If after reading that you still want to run SSH, you can, but it is not the same as running it on Linux/Unix. This article has several options.
If you still want to SSH into your container, read THIS and follow the instructions. It will install OpenSSH. You then configure it and generate an SSH key that you copy/paste into the DataStax OpsCenter agent upgrade dialog box when prompted for security credentials.
Lastly, upgrading the agent is as simple as moving the latest agent JAR (or whichever version of the agent JAR you want to run) into the DataStax agent bin directory. You can do that manually and redeploy your container, which is much simpler than using SSH.
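One way to move the JAR without SSH is docker cp into the running container (the container name, JAR file, and target path below are hypothetical), though baking the JAR into the image and redeploying, as suggested above, is the cleaner option:

```shell
# Sketch: copy a newer agent JAR straight into a running container.
# Container name, file, and path are hypothetical; printed, not executed.
copy='docker cp datastax-agent.jar cassandra-node:/opt/datastax-agent/bin/'
echo "$copy"
```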
Hope that helps,
Pat

Google Cloud server (GCE), custom image, SSH login issue

I'm playing with Google Compute Engine (GCE), as I'm planning to migrate from Rackspace (reason: GCE has good upgrade plans at the best discounted price).
I have a few issues with GCE, and one of them is that Ubuntu is not supported as an OS/image by default. But there is an alternate method to run any Linux distro on GCE, called Building an image from scratch, for uploading custom images and creating instances (servers) from the uploaded image.
I was able to create and run instances from the Ubuntu image I uploaded to GCE by following the link hagikuratakeshi.hatenablog.com. This is simply running Ubuntu in general. I didn't face any problems, but Google's gcutil tool prompts for an ssh passphrase and adds the key to the GCE metadata, yet accepts only password logins (then why does it prompt for a passphrase?).
I want to strictly follow Building an image from scratch as recommended by Google. But after following all the steps, I could not log in to my server instance via SSH. I guess this happens when I install the Google Compute Engine image packages: google-startup-scripts_1.1.2-1_all.deb, google-compute-daemon_1.1.2-1_all.deb & python-gcimagebundle_1.1.2-1_all.deb. These packages/scripts make some changes to the instance at startup and also to the SSH configuration, which are strongly recommended. Once I strictly follow the link, or once I install these packages, I can no longer establish an SSH connection after the instance is rebooted. An error message similar to the one below is shown while trying to connect:
test@machine1:~$ gcutil --service_version="v1" --project="mypro-555" ssh --zone="asia-east1-a" "server-instance-1"
INFO: Running command line: ssh -o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/test/.ssh/google_compute_engine -A -p 22 test@101.167.xxx.xxx -
ssh: connect to host 101.167.xxx.xxx port 22: Connection refused
NOTE: The user account test is available and common on both the local machine and the GCE server.
My main problem is the SSH connection when I strictly follow the steps. If I upload the fresh image and then follow the recommended steps, SSH works at first, but I cannot SSH again once I restart the instance; or, if I set up everything in the image before uploading, the created instance will be running but I cannot connect even once, and the error is the same.
Is anybody using GCE with a custom image? Are you able to connect even after following the recommended settings? Has anyone already fixed this SSH issue? Please post your comments!
EDIT 1
I could not figure it out from the logs, and here is the output of gcutil getserialportoutput server-instance-1.
The key here is that your ssh client says "connection refused". This indicates that there is indeed a machine at that IP address, but it's not accepting SSH connections. There are a few possible explanations:
The ssh daemon isn't running, or is listening on the wrong interface
Your instance is configured with a firewall that's denying SSH traffic
The GCE firewall rule to allow SSH traffic has been removed
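A few checks that map to those explanations, to be run on the instance (via the serial console) or from a machine with the Cloud SDK; the exact service name varies by distro, so these are assumptions:

```shell
# Sketch: one diagnostic per possible cause above; printed, not executed.
daemon='sudo service ssh status'                # is the ssh daemon running?
listen='sudo netstat -tlnp | grep ":22"'        # is it listening on port 22?
firewall='gcloud compute firewall-rules list'   # is an SSH allow rule present?
printf '%s\n' "$daemon" "$listen" "$firewall"
```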

How to change server for my application in cloudbees?

I want to change the server from JBoss 7.1 to Tomcat 7 on CloudBees. What are the ways to do this? Note that my application is already deployed and running.
You can deploy to Tomcat 7 through the CloudBees SDK using this command:
bees app:deploy -t tomcat7 -a app.war
Be aware that your app should be adapted to work with both containers.