I am trying to deploy Control Center as a Docker container.
Reference: https://www.gridgain.com/docs/control-center/latest/installation/docker
The configuration above works if Docker and GridGain are running on the same host.
I am trying to run the Control Center container on a host other than the GridGain nodes.
Which parameter needs to be updated/changed in order to connect Control Center to the GridGain server?
It's actually the other way around: the cluster connects to Control Center.
In your cluster you need to tell it where it can find the Control Center front end:
{GRIDGAIN_HOME}/bin/management.sh --uri https://control_center_uri:8008
It's in the documentation here.
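For example, if the Control Center containers run on a separate host (cc-host.example.com below is a placeholder), you run this on any of the server nodes:

# cc-host.example.com is a placeholder for the machine running the Control
# Center containers; 8008 is the default front-end port.
{GRIDGAIN_HOME}/bin/management.sh --uri https://cc-host.example.com:8008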
We are using Traefik to simulate our production environment. We have multiple services running in Docker on Kubernetes; a few of them are Java applications. In this stack, a developer can come and deploy code from whichever git branch they are working on, so at a given point we can have hundreds of full-fledged stacks running. We use Traefik for certificate resolution so that each stack can be hosted under its branch name.
Now I want to give developers the ability to debug their Java applications. That is fairly simple to do in Java: you attach a Java agent when starting up the application's Docker image. Basically, you pass -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=37000 as a JVM argument and the JVM is ready to accept remote debuggers.
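For example, one common way to pass this when the container starts (a sketch; the image name is a placeholder) is via JAVA_TOOL_OPTIONS:

# Sketch only -- my-java-app:latest is a placeholder image. Note: on JDK 9+
# the address needs the *: prefix (address=*:37000) to accept connections
# from outside the container; plain address=37000 binds to localhost only.
docker run -d \
  -e JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:37000" \
  -p 37000:37000 \
  my-java-app:latest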
Now, the JVM uses the JDWP protocol, which as far as I understand is a TCP protocol. My problem is that I want Traefik to create routes dynamically based on my Docker service labels. That part I was also able to figure out; I used such labels in the Docker service.
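The actual labels aren't reproduced here; for illustration, something of this shape (Traefik v2 TCP syntax; the router and entrypoint names are placeholders) is what I mean:

# Illustrative only -- jdwp-mybranch and the jdwp entrypoint are placeholder
# names, not the original labels from my stack.
docker service create --name my-java-app \
  --label "traefik.tcp.routers.jdwp-mybranch.rule=HostSNI(\`*\`)" \
  --label "traefik.tcp.routers.jdwp-mybranch.entrypoints=jdwp" \
  --label "traefik.tcp.services.jdwp-mybranch.loadbalancer.server.port=37000" \
  my-java-app:latest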
A developer then connects their remote debugger to the JVM through the Traefik entrypoint.
Now, if I use HostSNI(`*`) in the rule, then I am able to connect to the container. But the problem is that when I make a remote connection for debugging, Traefik can direct my request to any container, so the whole thing won't work as expected.
I believe there must be some other supported matcher for TCP rules, apart from only HostSNI. What is your opinion on this? Or have I missed something here?
I have an existing ACI. Can I add it to a VNET and subnet via the Azure CLI / Azure Cloud Shell?
Unfortunately, this isn't possible right now from the CLI; the az container commands don't support patching/upgrading an existing container instance or container group for the network profile property.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-update#properties-that-require-container-delete
If you are interested in not losing the traffic going to your existing ACI, you should deploy a new ACI into a private VNET/subnet and front both your containers with an Application Gateway or Load Balancer on the new VNET/subnet.
Once ready to make the move, you should direct all traffic to the ACI running in the VNET.
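Creating the new container group inside the VNET can be done from the CLI; a sketch with placeholder names:

# myRG, myVnet and mySubnet are placeholders; the image is Microsoft's
# hello-world sample. This creates a new container group inside the subnet.
az container create \
  --resource-group myRG \
  --name my-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --vnet myVnet \
  --subnet mySubnet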
In GlassFish 3.1 I have two instances on two SSH nodes, and they work fine in a cluster. I created a third SSH node and added its instance to the cluster, so the cluster now has three instances on three remote SSH nodes.
The web service runs on the third node, but it can't connect to the database. I believe the new instance has the same connectors, configuration, and resources as the other two, since the instance was added to the cluster and they all share the same cluster config.
I am new to GlassFish; please help me out.
Thanks
Currently I can use rsub with Sublime Text to edit remotely, but the container is behind a second layer of SSH that is only accessible from the host machine.
Just curious: how do you use your remote host machine if you don't even have SSH running on it?
Regarding your question: I think you need to install openssh-server directly inside the container and map the container's port 22 to a custom port on the host. Inside your container you'll have to run some initial process that launches all the processes you need (like openssh-server).
Consider this comprehensive example of using supervisord inside a Docker container.
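Once sshd is running in the container, the wiring looks something like this (image name, host port, and user are placeholders):

# my-image-with-sshd is a placeholder for an image whose init process (e.g.
# supervisord) starts openssh-server. Map container port 22 to host port 2222:
docker run -d -p 2222:22 my-image-with-sshd
# Then ssh straight into the container through the host, forwarding rsub's
# default port (52698) back to the machine running Sublime Text:
ssh -p 2222 -R 52698:localhost:52698 user@docker-host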
Hi, I'm currently working on a side project in which a central server will need to connect to several remote Docker daemons. My problem is with authentication.
Given that the project will be hosted on DigitalOcean, my first thought was to accept connections only on the private networking interface. The problem is that that interface is accessible to all other servers in the same datacenter.
My second thought was to allow only requests from the central server, using the DOCKER_HOST config. The problem is that, if I understand correctly, once the private IP of the central server becomes known, that IP can be spoofed.
My third thought is to enable TLS (https://docs.docker.com/articles/https/). I've never dealt with this before, and the tutorial is unclear to me; I lack knowledge of the terminology it uses heavily.
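From what I can tell so far, the setup on that page boils down to something like the following (the certificate file names are the ones the tutorial generates; on current versions the daemon binary is dockerd), but please correct me if I'm wrong:

# Daemon side, on each remote host: listen on TCP and require client certs
# signed by your CA (2376 is the conventional TLS port).
dockerd --tlsverify --tlscacert=ca.pem \
  --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H=0.0.0.0:2376
# Client side, on the central server: present a client cert from the same CA.
docker --tlsverify --tlscacert=ca.pem \
  --tlscert=cert.pem --tlskey=key.pem \
  -H=remote-host.example.com:2376 version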
So basically the problem is that I have a central client and multiple remote Docker hosts; what is the best way to connect to them? Thank you.
EDIT: I managed to solve the problem using HTTP authentication, by running nginx as a proxy in front of the Docker daemon.
My understanding is that you are trying to build a Docker cluster that can manage all nodes from one single central server.
This is very much what Docker's Swarm project does. From their docs, the basic idea of how it works is:
open a TCP port on each node for communication with the swarm manager
install Docker on each node
create and manage TLS certificates to secure your swarm
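For reference, recent Docker releases ship swarm mode built in, and it creates and manages those TLS certificates for you. A minimal sketch (the manager address is a placeholder; the join token is printed by the init command):

# On the machine chosen as manager (203.0.113.10 is a placeholder address):
docker swarm init --advertise-addr 203.0.113.10
# On every node you want managed; <worker-token> comes from the init output:
docker swarm join --token <worker-token> 203.0.113.10:2377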
Sorry, this should be posted as a comment, but I do not have enough rep to do that.