I was trying to run GD.CN on-premises without an internet connection, but I wasn't successful. Is it even possible? Thanks for your answers!
Yes, GoodData.CN can run in air-gapped environments. You need to preload all required Docker images (both GoodData and third-party) into your private Docker registry and deploy the Helm chart with its configuration pointing to this private registry. Here you can find more details.
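For illustration, the mirroring step could look roughly like this; the image names, registry address, chart reference and the Helm values key below are placeholders, not the chart's actual values, so check the GoodData.CN docs for the real list:

```bash
# Mirror the required images into a private registry (run on a machine with internet access)
PRIVATE_REGISTRY=registry.internal.example.com

for IMAGE in gooddata/example-component:1.2.3 bitnami/example-dependency:4.5.6; do
  docker pull "$IMAGE"
  docker tag  "$IMAGE" "$PRIVATE_REGISTRY/$IMAGE"
  docker push "$PRIVATE_REGISTRY/$IMAGE"
done

# Then install the chart with its configuration pointing at the private registry,
# e.g. via a values override (chart reference and key name here are hypothetical):
helm install gooddata-cn gooddata/gooddata-cn \
  --set image.repositoryPrefix="$PRIVATE_REGISTRY"
```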
Does Rundeck have to be online, or can I simply host it on a local VM? If it has to remain online, why? If it can be kept on a local VM, would that work? If not, why?
Rundeck needs to be up and running to execute the workflows that you define, whether it's hosted in a virtual environment or not. Just make sure that the Rundeck instance can access the remote nodes.
A good way to test Rundeck is to use the official Docker image.
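For example, a throwaway test instance could be started like this (the image is the official rundeck/rundeck one; the URL just has to match the address you browse to):

```bash
# Start a local Rundeck for testing
docker run -d --name rundeck \
  -p 4440:4440 \
  -e RUNDECK_GRAILS_URL=http://localhost:4440 \
  rundeck/rundeck

# Then open http://localhost:4440 (default login is admin/admin)
```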
I have a local cluster (minikube) that works perfectly well on my laptop (Mint 19.3, IntelliJ 2019.3 with the Cloud Code plugin, Java 11 backend, MongoDB, front end, ... all OK). But I can't find any useful information (on the Google Cloud Platform site or in IntelliJ) on configuring a new Google Cloud cluster. I can only see my minikube configuration in the cluster explorer... even when minikube is stopped!
It seems the configuration could be found in kubectl!? But how can I force the plugin to connect to GCP? I have a GCP account and have created a cluster and an image repository.
The GCP documentation looks really unclear.
I solved the problem. You need to install the Cloud SDK (is there another solution?) and use gcloud commands to link kubectl to the new Kubernetes context and to fetch the cluster credentials. A new kubectl configuration entry is generated, and you have to switch to that context (kubectl config use-context your-new-cluster).
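Concretely, the commands look roughly like this (cluster, zone and project names are placeholders):

```bash
# Generate a kubeconfig entry for the GKE cluster
gcloud container clusters get-credentials your-cluster \
  --zone europe-west1-b --project your-project-id

# List the contexts and switch to the new GKE one
kubectl config get-contexts
kubectl config use-context gke_your-project-id_europe-west1-b_your-cluster
```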
One more thing: to use Google storage for Docker images, you should enter where to find or push the image in the Run/Edit Configuration dialog, under image options -> gcr.io/your-project-id. I couldn't use the bucket I had created before pushing; a new one was created. Is there a solution to connect to an existing bucket?
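For reference, pushing a local image to that registry path can be done like this (project and image names are placeholders):

```bash
# Let Docker authenticate against Google Container Registry
gcloud auth configure-docker

# Tag the local image with the GCR path and push it
docker tag my-backend:latest gcr.io/your-project-id/my-backend:latest
docker push gcr.io/your-project-id/my-backend:latest
```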
If you want to manage your clusters from an on-prem machine, you will need to install the Cloud SDK and configure your cluster access; this will allow you to use gcloud and kubectl commands to create and administer clusters on GKE. The Cloud Code plugin should install this SDK automatically; you can take a look at this guide to learn how to use it.
My client is currently evaluating AKS, which seems really promising. Our current platform is based on Azure VMs we provision ourselves. We would like to set up private communication between our existing platform and the managed AKS cluster, but so far that does not seem to be supported.
Some example use cases for us are:
- Proxying incoming HTTP traffic via our main entry point, a Varnish server, to the new AKS environment so we don't have to change URLs
- Accessing non-publicly exposed APIs from the AKS environment
Right now the AKS cluster is in a different subscription and resource group than the other parts of our platform. The main reason we can't connect, though, seems to be that it's not possible to specify which private IP range should be used when creating an AKS cluster.
Is there support planned for this or is there a reliable workaround?
Thanks for the inquiry. There's a workaround for this case: using ACS Engine. "ACS Engine, for Azure Container Service Engine, is a CLI tool that helps to generate Azure Resource Manager templates to deploy Docker enabled clusters on Microsoft Azure. It works with all the orchestrators supported by ACS: Docker Swarm, Mesosphere DC/OS and Kubernetes."
Using this solution will allow you to integrate an Azure Container Service cluster into an existing virtual network. More details and a step-by-step guide can be found here: https://blogs.msdn.microsoft.com/jcorioland/2017/01/10/how-to-integrate-a-new-azure-container-service-cluster-into-an-existing-virtual-network-using-acs-engine/
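Roughly, the flow is: describe the cluster in an apimodel JSON that references the existing subnet (the vnetSubnetId field shown in the linked post), generate the ARM templates, and deploy them. The file and resource names below are placeholders, and the exact acs-engine invocation depends on the version you use:

```bash
# Generate ARM templates from the apimodel (which references the existing subnet)
acs-engine generate kubernetes.json

# Deploy the generated templates into your resource group
az group deployment create \
  --resource-group my-acs-rg \
  --template-file _output/mycluster/azuredeploy.json \
  --parameters @_output/mycluster/azuredeploy.parameters.json
```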
I have a VM running Windows Server 2016 Technical Preview, and have installed the Containers feature, and then run the Install-ContainerHost.ps1 script from Microsoft's container tools repo
https://github.com/Microsoft/Virtualization-Documentation/tree/master/windows-server-container-tools/Install-ContainerHost
I can now run the Docker daemon on Windows. Next I want to copy the certificates to a client machine so that I can issue commands to the host remotely, but I don't know where the certificates are stored on the host.
In the script the path variable is set to %ProgramData%\docker\certs.d
The certificates on Windows are located in the .docker folder in the current user's directory.
The docker --help command will show the exact default paths.
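If the certificates do exist there, the client invocation would look something like this (the host name and file paths are placeholders):

```bash
# Connect a remote Docker client using TLS client certificates
docker --tlsverify \
  --tlscacert="$HOME/.docker/ca.pem" \
  --tlscert="$HOME/.docker/cert.pem" \
  --tlskey="$HOME/.docker/key.pem" \
  -H tcp://container-host:2376 version
```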
AFAIK, no certificates are generated when you do what you are doing. If you drop certificates into the path you found, the daemon will use them and be secured; otherwise there are none on the machine, which explains why it isn't exposed remotely by default.
On my setup I connected without TLS, but that was on a VM that was only accessible from my dev machine. Obviously, anything that can be accessed over a network shouldn't do that.
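For completeness, the unsecured variant from a client machine is just this (only sensible when the host is not reachable from an untrusted network; the host name is a placeholder):

```bash
# Point the Docker client at the remote daemon over plain, unauthenticated TCP
export DOCKER_HOST=tcp://container-host:2375
docker version
docker ps
```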
Other people doing this are here: https://social.msdn.microsoft.com/Forums/en-US/84ca60c0-c54d-4513-bc02-14bd57676621/connect-docker-client-to-windows-server-2016-container-engine?forum=windowscontainers and here https://social.msdn.microsoft.com/Forums/en-US/9caf90c9-81e8-4998-abe5-837fbfde03a8/can-i-connect-docker-from-remote-docker-client?forum=windowscontainers
When I dug into the work-in-progress post, I found this:
Docker clients unsecured by default
In this pre-release, docker communication is public if you know where to look.
https://msdn.microsoft.com/en-us/virtualization/windowscontainers/about/work_in_progress#DockermanagementDockerclientsunsecuredbydefault
So eventually this should get better.
Hi, I'm currently working on a side project in which I'll have a central server that needs to connect to several remote Docker daemons. My problem is with authentication.
Given that the project will be hosted on DigitalOcean, my first thought was to accept connections only from the private networking interface. The problem is that this interface is accessible to all other servers in the same datacenter.
My second thought was to allow only requests from the central server, using the DOCKER_HOST config; the problem is that, if I understand correctly, if the private IP of the central server becomes known, the IP can be spoofed.
My third thought was to enable TLS (https://docs.docker.com/articles/https/). I've never dealt with this before, and the tutorial is unclear to me; I lack knowledge of the terminology, and it is used heavily there.
So basically the problem is that I have a central client and multiple remote Docker hosts; what is the best way to connect to them? Thank you.
EDIT: I managed to solve the problem using HTTP authentication, by running nginx as a proxy in front of the Docker daemon.
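In case it helps anyone, here is a rough sketch of that setup; the port numbers, paths and user name are my own choices, not a canonical config:

```bash
# Create a password file for HTTP basic auth (htpasswd ships with apache2-utils)
htpasswd -c /etc/nginx/.docker_htpasswd central-server

# Reverse-proxy the Docker API behind basic auth; assumes the daemon itself
# only listens on 127.0.0.1:2375
cat > /etc/nginx/conf.d/docker-proxy.conf <<'EOF'
server {
    listen 2376;

    location / {
        auth_basic           "Docker API";
        auth_basic_user_file /etc/nginx/.docker_htpasswd;
        proxy_pass           http://127.0.0.1:2375;
    }
}
EOF

nginx -s reload
```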
My understanding is that you are trying to build a Docker cluster that can manage all nodes from a single central server.
This is very much what Docker's Swarm project does. From their docs, here is the basic idea of how it works (see the sketch after the list):
- open a TCP port on each node for communication with the swarm manager
- install Docker on each node
- create and manage TLS certificates to secure your swarm
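As a side note, newer Docker releases ship this as built-in swarm mode, which also takes care of the TLS part; the basic flow looks like this (addresses and the token are placeholders):

```bash
# On the node acting as the manager
docker swarm init --advertise-addr <manager-ip>
# The command above prints a ready-made "docker swarm join --token ..." line

# On each worker node, run that printed command, e.g.:
docker swarm join --token <worker-token> <manager-ip>:2377
```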
Sorry, this should have been a comment, but I don't have enough rep to post one.