Do Docker Cloud "bring your own" nodes all need to have the same OS? - docker-cloud

Currently all our nodes are on Ubuntu, but I'm considering switching to CentOS, and I want to stagger the switchover.

Short answer: yes, you can mix operating systems during the switch.
See Introducing Docker Cloud
You can also provide your own node or nodes. This means you can use any Linux host connected to the Internet as a Docker Cloud node as long as you can install a Cloud agent. The agent registers itself with your Docker account, and allows you to use Docker Cloud to deploy containerized applications.
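For reference, registering a new node of any supported Linux distribution comes down to running the agent installer with the token shown in the Docker Cloud UI; a minimal sketch (the token below is a placeholder you get from "Bring your own node"):
curl -Ls https://get.cloud.docker.com/ | sudo -H sh -s <your-node-token>
An Ubuntu node and a CentOS node registered this way can coexist under the same account while you stagger the migration.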

Related

Best practice for setting up kubernetes cluster on GPU workstation

I'd like to find out the current best practice for setting up a Kubernetes cluster on a Dell Alienware Aurora workstation running Ubuntu 18.04 LTS for GPU-based TensorFlow workloads. This will be a staging ground for my services/containers before I deploy them to a full-blown k8s cluster. I am not sure what the correct strategy for such a setup looks like. Here are some possibilities:
Minikube with virtualbox driver, worker node in VM
Minikube with --vm-driver=none, relying on docker
Kubeadm with scheduling pods on master enabled
Kubeadm-dind (docker in docker)
Update: added the kubeadm options. Can someone also comment on the Docker-in-Docker solution? Will services/pods carry over seamlessly from a Docker-in-Docker setup to multi-node remote machines/cloud instances?
Would love to hear from Kubernetes experts or someone familiar with TensorFlow/GPU workloads on a single physical machine.
I'd go with 2 or 3 VMs and kubeadm. You'll have a real cluster to play with. There are some Vagrant/Ansible playbooks out there. GPU/TensorFlow support is still fairly new, so play ;)
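A rough sketch of the kubeadm route across a master VM and one or two worker VMs (the pod CIDR and the Flannel manifest URL are only examples; use whatever CNI plugin you prefer):
# on the master VM
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# on each worker VM, with the token/hash printed by kubeadm init
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# only if you also want to schedule pods on the master (option 3 above)
kubectl taint nodes --all node-role.kubernetes.io/master-
For GPU workloads you would additionally install the NVIDIA device plugin on the node(s) that actually have the GPU.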

Trying to find docker-compose remote API

Can I run docker-compose through the Docker daemon's remote socket?
I wasn't able to find anything in the Engine API: https://docs.docker.com/engine/api/v1.24/#310-tasks
In case Docker does not support that, are you aware of any docker-compose remote API?
thanks in advance
Docker Compose is just a utility that delegates commands to the Docker daemon. Docker Compose does not have a client-server architecture like Docker; it is only a client tool.
Thus there is no docker-compose API. You can achieve everything by talking directly to the API exposed by the Docker daemon.
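For example, the equivalent of bringing up a single service yourself is just two Engine API calls; a minimal sketch against the local Unix socket (image name, container name, and API version are arbitrary examples; point curl at the daemon's TCP address instead if it is remote):
curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \
  -d '{"Image": "nginx:alpine"}' \
  -X POST "http://localhost/v1.24/containers/create?name=web"
curl --unix-socket /var/run/docker.sock -X POST "http://localhost/v1.24/containers/web/start"
docker-compose itself essentially translates the YAML file into calls like these.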

Is it possible to connect physical nodes to a docker selenium hub

When I spin up an AWS instance with a docker-selenium image, would I be able to connect nodes that are not running the Docker image?
For example, using an AWS instance running the docker-selenium image as the hub, could I connect a MacBook (with Safari) as a node, assuming the networking has been set up correctly? Would this work?
I don't see any reason why this would not work, as all interaction between nodes and the hub is done over HTTP, as far as I remember.
Have you tried it?
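If you want to try it, a non-Docker node registers with the hub the same way as any other Selenium node; a sketch for a MacBook with Safari (the jar version and hub address are placeholders):
java -jar selenium-server-standalone-3.14.0.jar -role node \
  -hub http://<aws-instance-ip>:4444/grid/register \
  -browser browserName=safari
Just make sure the hub can also reach the node's port (the hub proxies test traffic back to the node), not only the other way around.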

Is it possible to deploy Spinnaker to an instance smaller than m4.xlarge on AWS?

We are currently following the default deployment instructions for Spinnaker, which specify m4.xlarge as the instance type.
http://www.spinnaker.io/v1.0/docs/creating-a-spinnaker-instance#section-amazon-web-services
We made an unsuccessful attempt to deploy it to an m4.large, but the services didn't start.
Has anyone tried something similar and succeeded?
It really depends on the size of your cloud.
There are 4 core services that you need: gate/deck/orca/clouddriver. You can shut the other ones off if, say, you don't care about automated triggers, baking, or Jenkins integration.
I'm able to run this locally with the Docker images with about 8 GB of RAM, and it works. Using S3 instead of Cassandra also helps here.
You can play around with the settings in the baked image of Spinnaker, but for internal demos and whatnot, I've been able to just spin up a VM, install Docker, and run the Docker Compose config on an m4.large.
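As a sketch of that approach, once you have a host with Docker and a Spinnaker docker-compose file, you can bring up just the core services plus Redis (the exact service names are an assumption; check the compose file you are using, e.g. the experimental docker-compose setup in the spinnaker/spinnaker repo):
docker-compose up -d redis gate clouddriver orca deck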

The best way to develop with OpenShift Origin: VM or local installation

What is the best way to develop with OpenShift Origin? Is it using a VM, or installing it locally? I have tried installing the VM and could not log in to it. What are the default credentials used to log in to the Fedora VM?
Default credentials
Depending on which route you follow (see below) there may or may not be real authentication in place.
If you have the AllowAllPasswordIdentityProvider in place, you can get away with test/test or whatever.
If you take the binary version (see below), this is what you'll have by default. I changed it to HTPasswdPasswordIdentityProvider instead.
For the other options I think you will get a user called system, with the password admin, out of the box.
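For reference, that change is made in the identityProviders section of master-config.yaml; a minimal sketch, assuming an htpasswd file at /etc/origin/master/htpasswd:
oauthConfig:
  identityProviders:
  - name: htpasswd_auth
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/master/htpasswd
You then create users with the standard htpasswd tool, e.g. htpasswd -c /etc/origin/master/htpasswd developer.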
Docker container version
You can quickly get OpenShift running in a Docker container using images from Docker Hub on a Linux system. This method is supported on Fedora, CentOS, and Red Hat Enterprise Linux (RHEL) hosts only.
Link: https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container
As per the Origin folks, this setup is not (yet) a full example, but it is very easy to get started with. You should be able to follow the instructions to get an all-in-one instance up and running in no time. However, this approach cannot teach you how to create a cluster (master(s) and node(s)).
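The linked page essentially comes down to one privileged docker run; a hedged sketch roughly matching what the Origin docs showed at the time (the image tag and mount list may differ for your release):
sudo docker run -d --name origin --privileged --pid=host --net=host \
  -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys \
  -v /sys/fs/cgroup:/sys/fs/cgroup:rw -v /var/lib/docker:/var/lib/docker:rw \
  -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
  openshift/origin start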
Vagrant VM
This image is based off of OpenShift Origin and is a fully functioning OpenShift instance with an integrated Docker registry. The intent of this project is to allow web developers and other interested parties to run OpenShift V3 on their own computer. Given the way it is configured, the VM will appear to your local machine as if it were running somewhere off the machine.
The OpenShift Master, Node, Docker Registry, and other pieces run in one VM. Given its focus on application developers, it should NOT be used in production.
Link: https://www.openshift.org/vm
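Bringing it up is the usual Vagrant workflow; a sketch, assuming the all-in-one box published on Vagrant Cloud (the box name is an assumption, check the openshift.org/vm page for the current one):
vagrant init openshift/origin-all-in-one
vagrant up --provider=virtualbox
vagrant ssh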
Binary option
Red Hat periodically publishes binaries to GitHub, which you can download on the OpenShift Origin Releases page.
Link: https://github.com/openshift/origin/releases
This is the option I follow currently. You download the binaries, install Go, then set up the oc client tools. As a next step you generate the configuration files and start adding your system components (router, ...); see the sketch after the link below.
Follow this page to understand the basics:
Link: https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
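A sketch of that flow with the downloaded binaries (flags changed between Origin releases, so treat these as indicative rather than exact):
# generate master and node configuration files, then start with them
openshift start --write-config=openshift.local.config
openshift start --master-config=openshift.local.config/master/master-config.yaml \
  --node-config=openshift.local.config/node-<hostname>/node-config.yaml
# use the generated admin credentials to add system components such as the router and registry
export KUBECONFIG=$(pwd)/openshift.local.config/master/admin.kubeconfig
oadm router router --replicas=1 --service-account=router
oadm registry --service-account=registry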
Ansible route
For production installation you probably want to install your cluster via Ansible.
My humble advice is to do this once you have gained a bit of experience configuring things by hand (see the previous point). Let's hear from people with more experience, though.
Link: https://docs.openshift.org/latest/install_config/install/index.html
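A sketch of the Ansible route with the openshift-ansible playbooks (the playbook path follows the repo layout of that era; the inventory file describing your masters and nodes is something you write yourself):
git clone https://github.com/openshift/openshift-ansible.git
cd openshift-ansible
ansible-playbook -i /path/to/inventory/hosts playbooks/byo/config.yml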
Documentation in general
Link: https://docs.openshift.org/latest/install_config/master_node_configuration.html#creating-new-configuration-files
Spin up a CentOS 7 VM and download the latest Origin client tools:
wget https://github.com/openshift/origin/releases/download/v1.3.0-alpha.2/openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit.tar.gz
tar xzvf openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit.tar.gz
ln -s /root/openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit/oc /usr/local/bin/oc
chmod 755 /root/openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit/oc
Bring up your single node origin cluster:
oc cluster up --use-existing-config --host-data-dir=/var/tmp/etcd
Log in using the instructions provided.