I'd like to find out the current best practice for setting up a Kubernetes cluster on a Dell Alienware Aurora workstation running Ubuntu 18.04 LTS for GPU-based TensorFlow workloads. This will be a staging ground for my services/containers before I deploy them to a full-blown k8s cluster. I am not sure what the correct strategy for such a setup looks like. Here are some possibilities:
Minikube with virtualbox driver, worker node in VM
Minikube with --vm-driver=none, relying on docker
Kubeadm with scheduling pods on master enabled
Kubeadm-dind (docker in docker)
Update: added the kubeadm options. Can someone also comment on the Docker-in-Docker solution? Will services/pods carry over seamlessly from a Docker-in-Docker setup to multi-node remote machines/cloud instances?
Would love to hear from Kubernetes experts or from someone familiar with TensorFlow/GPU workloads on a single physical machine.
I'd go with 2 or 3 VMs and kubeadm. You'll have a real cluster to play with. There are some Vagrant/Ansible playbooks out there. GPU/TensorFlow support is still fairly new, so experiment ;)
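If you do want to stay on the bare workstation instead, the kubeadm route with scheduling on the master enabled is also workable. Below is a minimal sketch, assuming the NVIDIA driver and nvidia-docker2 are already installed and set as Docker's default runtime; the manifest URLs and versions are only illustrative and may have moved since:
# Initialise a single-node cluster (pod CIDR chosen to match Flannel below)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Allow pods to be scheduled on the master, since this is a one-machine cluster
kubectl taint nodes --all node-role.kubernetes.io/master-
# Install a pod network add-on (Flannel here; the URL may have moved)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Expose the GPU to Kubernetes via the NVIDIA device plugin (version is an example)
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.11/nvidia-device-plugin.yml
Pods can then request the GPU via the nvidia.com/gpu resource, which is the same mechanism you would later use on a multi-node cloud cluster.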
Related
Can anyone tell me where I can find out how to monitor Docker images with Falco? Right now I'm using Ubuntu for testing purposes, but eventually I want to use it in an AWS Fargate environment.
Thanks
Any help from the community on this would be appreciated.
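In case it helps while waiting for a fuller answer: the usual way to watch a local Docker host with Falco is to run it as a privileged container next to the daemon. A rough sketch, with the image name and mount list taken from the upstream docs of the time (double-check them against the current documentation):
# Run Falco privileged so it can see the host kernel and the Docker daemon
docker run -d --name falco --privileged \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev \
  -v /proc:/host/proc:ro \
  -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro \
  -v /usr:/host/usr:ro \
  falcosecurity/falco
# Follow the alerts Falco emits
docker logs -f falco
Note that this approach relies on a kernel module on the host, so it will not carry over as-is to Fargate, where you do not control the underlying host.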
Currently all our nodes are on Ubuntu, but I'm considering switching to CentOS, and I want to stagger the switchover.
Short answer: Yes.
See Introducing Docker Cloud
You can also provide your own node or nodes. This means you can use any Linux host connected to the Internet as a Docker Cloud node as long as you can install a Cloud agent. The agent registers itself with your Docker account, and allows you to use Docker Cloud to deploy containerized applications.
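As far as I recall, bringing your own node boils down to running the one-line agent installer that Docker Cloud generates for your account; it is roughly of this shape (the URL and token come from the Docker Cloud UI, so treat this purely as an illustration):
# Illustrative only: Docker Cloud's "bring your own node" screen generates a
# one-liner of roughly this shape, with a token tied to your account
curl -Ls https://get.cloud.docker.com/ | sudo -H sh -s <your-node-token>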
We are currently following the default deployment instructions for Spinnaker, which state that m4.xlarge should be used as the instance type.
http://www.spinnaker.io/v1.0/docs/creating-a-spinnaker-instance#section-amazon-web-services
We made an unsuccessful attempt to deploy it to an m4.large, but the services didn't start.
Has anyone tried something similar and succeeded?
It really depends on the size of your cloud.
There are four core services that you need: gate, deck, orca, and clouddriver. You can shut the other ones off if, say, you don't care about automated triggers, baking, or Jenkins integration.
I'm able to run this locally with the Docker images on about 8 GB of RAM and it works. Using S3 instead of Cassandra also helps here.
You can play around with the settings in the baked Spinnaker image, but for internal demos and whatnot, I've been able to just spin up a VM, install Docker, and run the Docker Compose config successfully on an m4.large.
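To make the "core services only" idea concrete, a rough sketch of what that looks like with Docker Compose is below; the service names are assumptions based on Spinnaker compose files of that era, so check them against the compose file you actually use:
# From the directory containing Spinnaker's docker-compose.yml, start only the
# core services plus Redis as their backing store (service names may differ)
docker-compose up -d redis clouddriver orca gate deck
# Confirm everything came up and keep an eye on memory usage
docker-compose ps
docker stats --no-stream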
What is the best way to develop with OpenShift Origin? Is it to use the VM, or to install it locally? I have tried installing the VM, but I could not log in to it. What are the default credentials used to log in to the Fedora VM?
Default credentials
Depending on which route you follow (see below), there may or may not be real authentication in place.
If you have the AllowAllPasswordIdentityProvider in place, you can get away with test/test or any other combination.
If you take the binary version (see below), this is what you'll have by default. I changed it to HTPasswdPasswordIdentityProvider instead.
For the other options, I think the setup ships with a user called system and the password admin.
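If you switch to HTPasswdPasswordIdentityProvider as I did, you create the credentials yourself with htpasswd and point the master configuration at that file. A small sketch (the file path is only an example; it has to match the file entry in your master-config.yaml):
# Create an htpasswd file and add a user for the HTPasswdPasswordIdentityProvider
# (path is an example -- it must match what master-config.yaml points at)
sudo htpasswd -c -b /etc/origin/master/htpasswd developer s3cret
# Then log in with those credentials
oc login -u developer -p s3cret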
Docker container version
You can quickly get OpenShift running in a Docker container using
images from Docker Hub on a Linux system. This method is supported on
Fedora, CentOS, and Red Hat Enterprise Linux (RHEL) hosts only.
Link: https://docs.openshift.org/latest/getting_started/administrators.html#running-in-a-docker-container
According to the Origin folks, this setup is not (yet) a full example, but it is very easy to get started with. You should be able to follow the instructions and have an all-in-one instance up and running in no time. However, this approach cannot teach you how to create a cluster (master(s) and node(s)).
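For reference, the command in those docs boils down to running the openshift/origin image privileged, with the host's network and Docker daemon available inside. Roughly (flags follow the docs of that era, so verify them against the current page):
# Run the all-in-one Origin server in a privileged container sharing the host's
# network, PID namespace, and Docker daemon (flag list may vary by release)
docker run -d --name origin \
  --privileged --pid=host --net=host \
  -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys \
  -v /var/lib/docker:/var/lib/docker:rw \
  -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes \
  openshift/origin start
# Get a shell inside the container, where the oc client is available
docker exec -it origin bash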
Vagrant VM
This image is based off of OpenShift Origin and is a fully functioning
OpenShift instance with an integrated Docker registry. The intent of
this project is to allow Web developers and other interested parties
to run OpenShift V3 on their own computer. Given the way it is
configured, the VM will appear to your local machine as if it was
running somewhere off the machine.
The OpenShift master, node, Docker registry, and other pieces run in one VM. Given its focus on application developers, it should NOT be used in production.
Link: https://www.openshift.org/vm
Binary option
Red Hat periodically publishes binaries to GitHub, which you can
download on the OpenShift Origin Releases page.
Link: https://github.com/openshift/origin/releases
This is the option I currently follow. You download the binaries, install Go, and then set up the oc client tools. As the next step, you generate the configuration files and start adding your system components (router, ...).
Follow this page to understand the basics:
Link: https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
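To give an idea of what "generate the configuration files and add the system components" looks like with the binaries, here is a rough outline; the commands and paths follow the Origin docs of that era, and the exact flags can vary by release:
# Write master and node configuration files into ./openshift.local.config
openshift start --write-config=openshift.local.config
# Start the all-in-one server from that configuration
openshift start \
  --master-config=openshift.local.config/master/master-config.yaml \
  --node-config=openshift.local.config/node-$(hostname)/node-config.yaml
# In another shell, act as cluster admin and add the system components
export KUBECONFIG=$(pwd)/openshift.local.config/master/admin.kubeconfig
oadm registry
oadm router --service-account=router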
Ansible route
For a production installation, you probably want to install your cluster via Ansible.
My humble advice is to do this once you have gained a bit of experience configuring things by hand (see the previous point). Let's hear from people with more experience, though.
Link: https://docs.openshift.org/latest/install_config/install/index.html
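For what it's worth, that advanced install is driven by the openshift-ansible playbooks run against an inventory describing your hosts; in outline it looks roughly like this (the playbook path reflects the repository layout of that era and may differ by release):
# Clone the installer playbooks and run them against your own inventory
git clone https://github.com/openshift/openshift-ansible.git
cd openshift-ansible
# inventory/hosts must describe your masters and nodes (see the install docs for the format)
ansible-playbook -i inventory/hosts playbooks/byo/config.yml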
Documentation in general
Link: https://docs.openshift.org/latest/install_config/master_node_configuration.html#creating-new-configuration-files
Spin up a CentOS 7 VM and download the latest Origin client tools:
wget https://github.com/openshift/origin/releases/download/v1.3.0-alpha.2/openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit.tar.gz
tar xzvf openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit.tar.gz
ln -s /root/openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit/oc /usr/local/bin/oc
chmod 755 /root/openshift-origin-client-tools-v1.3.0-alpha.2-983578e-linux-64bit/oc
Bring up your single-node Origin cluster:
oc cluster up --use-existing-config --host-data-dir=/var/tmp/etcd
Login using the instructions provided.
I like the Docker Hub with Dockerfiles idea very much.
Is there a similar way to get a small working Linux VirtualBox instance in a few commands, one that could also be controlled from the command line?
Vagrant is a great tool that does just what you want, and much more! It's a Ruby application written for fast and simple setup of minimal development environments.
By default it creates VirtualBox machines, but it supports VMware and many other providers too. The whole setup of a box is managed by a single Vagrantfile! Your VM options, network settings, and provisioning are all defined there.
Setting up a VirtualBox box is as easy as executing just two shell commands; check out the Getting Started Guide for an example using Ubuntu.
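For reference, those two commands look roughly like this (the box name is the one the Getting Started Guide used at the time; any box from the catalogue works):
# Create a Vagrantfile for an Ubuntu box and boot the VM
vagrant init hashicorp/precise64
vagrant up
# SSH into the running VM, and tear it down again when you're done
vagrant ssh
vagrant destroy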
You can use a vast range of prepared boxes from the HashiCorp Atlas, or build your own.
Also, Vagrant doesn't limit you to one virtual machine per development setup; it enables you to model cluster setups on a single machine using multiple VMs. I myself use Docker for that part, though.
Edit: fixed a typo :<