Fuse Fabric8 Clustering - automation

I am new to fabric8 and have a question about clustering with Docker images.
I have pulled the fabric8/fabric8 Docker image. I want the containers I launch to automatically join the same cluster, without running fabric:create and fabric:join by hand.
Say I launch three containers of fabric8/fabric8; they should fall under the same cluster without manual configuration.
Please share some links or references. I'm lost.
Thanks in advance.

In fabric8 v1 the idea was that you create a fabric using the fabric:create command, and then you spin up Docker containers using the Docker container provider, in pretty much the same way as you would with child containers (either using the container-create-docker command, or using hawtio and selecting Docker as the container type).
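
A minimal sketch of that flow from the Fuse/Karaf console (the exact option names vary across fabric8 versions, so treat the flags here as illustrative):

fabric:create --clean --wait-for-provisioning
container-create-docker --profile default node2
container-create-docker --profile default node3

The key point is that only the first container runs fabric:create; the others are provisioned from it by the Docker container provider, so they land in the same ensemble without a manual fabric:join.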

Related

Google cloud kubernetes cluster newbie question

I am a newbie with GKE. I created a GKE cluster with a very simple setup: it only has one GPU node, and everything else was left at the defaults. After the cluster was up, I was able to list the nodes and SSH into them. But I have two questions.
I tried to install the NVIDIA driver using the command:
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
The output was:
daemonset.apps/nvidia-driver-installer configured
But nvidia-smi cannot be found at all. Should I do something else to make it work?
On the worker node, there was no .kube directory and no 'config' file. I had to copy them from the master node to the worker node to make things work. And since the config file on the master node updates automatically, I have to copy it again and again. Did I miss some steps when creating the cluster, or how can I resolve this problem?
I'd appreciate it if someone could shed some light on this. It has been driving me crazy after several days of working on it.
Tons of thanks.
Alex.
For the DaemonSet to work, your worker node needs to carry the cloud.google.com/gke-accelerator label (see this line). The DaemonSet checks for this label on a node before scheduling any driver-installer pods on it. I'm guessing the default node pool you created did not have this label. You can find more details in the GKE docs here.
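
For illustration, nodes in a GPU node pool created along these lines get that label applied automatically; the cluster name, zone, and accelerator type below are placeholders:

gcloud container node-pools create gpu-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --accelerator type=nvidia-tesla-k80,count=1 \
    --num-nodes 1

Once the label is present, the DaemonSet from your kubectl apply command should schedule the installer pod on that node.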
The worker nodes are, by design, just that: worker nodes. They do not need privileged access to the Kubernetes API, so they don't need any kubeconfig files. The communication between worker nodes and the API is strictly controlled through the kubelet binary running on each node. Therefore, you will never find kubeconfig files on a worker node, and you should never put them there either: if a node gets compromised, the keys in that file can be used to damage the API server. Instead, make it a habit to either use the master nodes for kubectl commands or, better yet, keep the kubeconfig on your local machine, keep it safe, and issue commands remotely to your cluster.
After all, all you need is access to an API endpoint for your Kubernetes API server, and it shouldn't matter where you access it from, as long as the endpoint is reachable. So, there is no need whatsoever to have kubeconfig on the worker nodes :)
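
On GKE specifically, one way to set that up (reusing the placeholder cluster name and zone from above) is to pull the credentials down to your local machine and talk to the cluster from there:

gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl get nodes

The first command writes an entry into your local kubeconfig; the second just verifies that the API endpoint is reachable from your machine.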

Azure Container Instances: isolate containers inside a group

I want to be able to deploy an ACI container group but I want none of the containers in the group to be able to communicate with one another. According to the documentation, containers can communicate on any port even if it's not exposed. Is there a way to lock down all containers within a group?
For your requirements, I don't think there is an appropriate way to achieve this with ACI. Maybe you could install a firewall inside the image and use that, but it's not a good approach and it will make the image bigger.
I recommend you give AKS a try instead: it supports network policies between pods, and you can deploy each image as its own single-container pod. You can get more details from the AKS documentation on network policies.
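
As a sketch of what that looks like on AKS (with network policy enabled at cluster creation), a default-deny policy blocks all pod-to-pod ingress in a namespace; the namespace name here is a placeholder:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-apps
spec:
  podSelector: {}
  policyTypes:
    - Ingress

With this applied, pods in my-apps cannot reach each other until you add narrower policies that explicitly allow the traffic you want.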

How to create Redis cluster in DC/OS

I was able to create a cluster of Redis instances in my local machine.
But I was wondering how we can achieve this in a PaaS environment, i.e. in DC/OS?
Any help would be much appreciated.
If you're specifically looking at DC/OS, you can have a look at the example at https://github.com/dcos/examples/tree/master/redis which covers some of the basic components as you get started.
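
If you just want a single Redis instance on DC/OS as a starting point, a minimal Marathon app definition is one common approach. This is a sketch; the app id, image tag, and resource sizes are placeholders:

{
  "id": "/redis",
  "cpus": 0.5,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "redis:3.2",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 6379, "hostPort": 0 }]
    }
  }
}

Save it as redis.json and deploy it with dcos marathon app add redis.json.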

Using Kubernetes or Apache Mesos

We have a product described in some Dockerfiles, which can create the necessary Docker containers. Some containers will just run basic apps, while other containers will run clusters (Hadoop).
Now the question is which cluster manager to use:
Kubernetes, Apache Mesos, or both?
I read that Kubernetes is good for 100% containerized environments, while Apache Mesos is better for environments that are partly containerized and partly not. But Apache Mesos is supposedly better for running Hadoop in Docker (?).
Our environment is composed of only Docker containers, but some run a Hadoop cluster and some run apps.
Which would be the best fit?
Functionally, both do the same thing: orchestrate Docker containers. But obviously they do it in different ways, and what you can easily achieve with one might prove difficult in the other, and vice versa.
Mesos has higher complexity and a steeper learning curve, in my opinion. Kubernetes is relatively simpler and easier to grasp. You can literally spawn your own Kube master and minions by running one command and specifying the provider: Vagrant, AWS, etc. Kubernetes can also be run on top of Mesos, so there is also the possibility of trying both.
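
For illustration, the one-command spin-up from the Kubernetes release scripts of that era looked roughly like this (a sketch; the kube-up mechanism has since been deprecated in favour of tools like kubeadm):

export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash

Swapping the provider value (e.g. aws, gce) is what selects where the master and minions get created.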
For the Hadoop-specific use case you mention, Mesos might have an edge: it may integrate better into the Apache ecosystem, and Mesos and Spark were created by the same minds.
Final thoughts: start with Kube, progressively exploring how to make it work for your use case. Then, once you have a good grasp of it, do the same with Mesos. You might end up liking pieces of each and have them coexist, or find that Kube is enough for what you need.

Two Logstash instances on the same Docker container

I am wondering if there is a way that two Logstash processes with separate configurations can be run in a single Docker container.
My setup has a Logstash process using a file as input and sending events to Redis; from there a second Logstash process picks them up and hands them to a custom HTTP process. So: Logstash --> Redis --> Logstash --> HTTP. I was hoping to keep the two Logstash instances and Redis in the same Docker container. I am still new to Docker and would highly appreciate any input or feedback on this.
This would be more complicated than it needs to be. In the Docker world, it is much simpler to run three containers doing three things than one container that does them all. It is possible, though.
You need to run an init process in your container to control multiple processes, and launch that as your container's entry point. The init will have to know how to launch the processes you are interested in: both Logstash instances and Redis. Phusion's baseimage-docker provides an image with a good init system, but its launch scripts are based on runit and can be hard to pick up.
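
If you do go the single-container route, supervisord is an alternative init to runit that is easier to read. A sketch of the config, assuming Redis, Logstash, and supervisor are installed in the image, and with placeholder Logstash config paths:

[supervisord]
nodaemon=true

[program:redis]
command=redis-server

[program:logstash-shipper]
command=logstash -f /etc/logstash/shipper.conf

[program:logstash-http]
command=logstash -f /etc/logstash/http.conf

Your Dockerfile would then end with something like CMD ["supervisord", "-c", "/etc/supervisord.conf"], so that supervisord runs as PID 1 and keeps all three processes alive.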
If you wanted each container to run only a single process instead, you can use a docker-compose file to launch all three containers and link them together.
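
A docker-compose sketch of that three-container layout; the image tags, config paths, and service names are all placeholders:

version: "2"
services:
  redis:
    image: redis:3.2
  shipper:
    image: logstash:2.4
    command: logstash -f /config/shipper.conf
    volumes:
      - ./config:/config
    links:
      - redis
  http-out:
    image: logstash:2.4
    command: logstash -f /config/http.conf
    volumes:
      - ./config:/config
    links:
      - redis

The two Logstash configs would point their Redis input/output at the hostname redis, which compose makes resolvable from the linked containers.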