How to send Filebeat, Metricbeat and Packetbeat data to a Fluentd daemonset deployed on a 3-node Kubernetes cluster? - filebeat

I have a 3-node Kubernetes cluster on which Fluentd is deployed as a daemonset. This Fluentd tails all the container logs on the Kubernetes cluster and sends them to Elasticsearch deployed on the same cluster. I have an external Linux server running Filebeat, Metricbeat and Packetbeat, and I want to send their Beats data to the Fluentd running on my Kubernetes cluster, so that the data ends up in Elasticsearch. How can I do that? Fluentd has an elasticbeats plugin, but I am not sure how to expose Fluentd outside the Kubernetes cluster (it would probably need a Service) so that it can accept the Beats data from the Linux server.
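One hedged sketch of the wiring, assuming the Fluentd daemonset pods run a Beats input plugin (e.g. fluent-plugin-beats) listening on port 5044: expose that port with a NodePort Service, then point each Beat's Logstash output on the external server at any node IP. All names, namespaces and port numbers below are illustrative, not taken from the question:

```yaml
# Illustrative NodePort Service exposing the fluentd daemonset's
# assumed Beats listener (port 5044) outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: fluentd-beats        # hypothetical name
  namespace: kube-system     # wherever the daemonset actually runs
spec:
  type: NodePort
  selector:
    app: fluentd             # must match the daemonset's pod labels
  ports:
  - name: beats
    port: 5044               # port the Beats input plugin listens on
    targetPort: 5044
    nodePort: 30044          # reachable as <any-node-ip>:30044
```

On the Linux server, the Beats would then use their Logstash output (Beats speak the same lumberjack protocol to a Fluentd beats input), e.g. `output.logstash: hosts: ["<node-ip>:30044"]` in filebeat.yml. A LoadBalancer Service would work the same way if the cluster supports one.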

Related

How to migrate rabbitmq (everything) to bitnami rabbitmq cluster

How to migrate rabbitmq (everything) to bitnami rabbitmq cluster?
I have a single-node RabbitMQ server with plenty of queues, exchanges, channels, producers and consumers.
We have a task to migrate everything on this RabbitMQ server to a Bitnami Helm chart (https://bitnami.com/stack/rabbitmq/helm) RabbitMQ with a 3-node cluster, running in a Kubernetes cluster.
The current RabbitMQ server's hostname is different from the hostnames of the three RabbitMQ cluster nodes.
I'm a beginner with RabbitMQ.
So how do we approach this?
How do we migrate the data of a RabbitMQ server whose hostname will be different from that of the target Bitnami Helm chart cluster?
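One common approach (a sketch, not a verified procedure for this exact setup): RabbitMQ topology — queues, exchanges, bindings, users and vhosts — can be exported as a JSON definitions file and re-imported on the new cluster, and definitions are not tied to hostnames. The pod name and file paths below are hypothetical:

```shell
# On the old single-node server (rabbitmqctl export_definitions
# is available since RabbitMQ 3.8):
rabbitmqctl export_definitions /tmp/definitions.json

# Copy the file into one pod of the new Bitnami cluster
# (pod name is hypothetical) and import it there:
kubectl cp /tmp/definitions.json rabbitmq-0:/tmp/definitions.json
kubectl exec rabbitmq-0 -- rabbitmqctl import_definitions /tmp/definitions.json
```

Note that definitions do not include the messages themselves; messages sitting in queues would need to be drained by consumers or moved with the shovel or federation plugins.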

How to deploy 2 services in an Apache Ignite cluster

I have a Spring Boot service that configures Ignite at startup and executes Ignition.start(). There are 2 more Spring Boot services that need to be placed in the same Ignite cluster. How can I do this?
You are running Apache Ignite in embedded mode via the Maven dependency. To share the same Ignite instance across services, you need to create an Ignite cluster in distributed mode and then connect to that cluster from all the services using a thin or thick client, as your needs dictate.
For example, to create an Ignite cluster using Docker, refer to: https://ignite.apache.org/docs/latest/installation/installing-using-docker
There are other options available for creating an Ignite cluster.
Once the cluster is created, you can use a thick/thin client to connect to it.
Please refer to:
https://www.gridgain.com/docs/latest/getting-started/concepts
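As a minimal illustration of the Docker route the answer links to, a single standalone Ignite node could be started like this (one node only; a real multi-node cluster needs a discovery IP-finder configuration, which the linked docs cover):

```yaml
# Illustrative docker-compose.yml starting one standalone Ignite node
# that the Spring Boot services connect to instead of embedding Ignite.
version: "3"
services:
  ignite:
    image: apacheignite/ignite   # official image from the Ignite docs
    ports:
      - "10800:10800"            # thin client connector port
      - "47100:47100"            # communication SPI port
      - "47500:47500"            # discovery SPI port
```

Each Spring Boot service would then drop Ignition.start() and instead open a thin client connection to the node's host on port 10800 (or join as a thick client node using the same discovery configuration).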

How to do Redis slave replication in a k8s cluster?

By this famous guestbook example:
https://github.com/kubernetes/examples/tree/master/guestbook
It creates a Redis master/slave deployment and services. It also has a subfolder named redis-slave, which is used to build a Docker image and run the Redis replication command.
Dockerfile
run.sh
The question is: once the Redis master and slave are deployed to the k8s cluster, how do you run that command? By deploying a new container? That would not be related to the slave container already deployed.
Is there a better way to do Redis replication between a master and slave running in a k8s cluster?
One option you have is using helm to deploy the redis-ha app.
Info about helm: https://github.com/kubernetes/helm
The redis-ha helm app page: https://hub.kubeapps.com/charts/stable/redis-ha
Redis Sentinel is often suggested for simple master-slave replication and high availability.
Unfortunately, Sentinel does not fit Kubernetes world well and it also requires a Sentinel-aware client to talk to Redis.
You could try the Redis operator, which can be considered a Kubernetes-native replacement for Sentinel and lets you create a Redis deployment that survives most kinds of failures without human intervention.
Here is how you can set up a Redis HA master-slave cluster in Kubernetes/OpenShift OKD.
Basically you have to use a ConfigMap and a StatefulSet in combination with VolumeClaims:
https://reachmnadeem.wordpress.com/2020/10/01/redis-ha-master-slave-cluster-up-and-running-in-openshift-okd-kubernetes/
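A minimal sketch of the ConfigMap + StatefulSet idea (all names are illustrative, not taken from the linked post): the first pod of the StatefulSet acts as the master, and the replicas point `replicaof` at its stable per-pod DNS name, which a StatefulSet provides via its headless Service:

```yaml
# Illustrative ConfigMap holding the replica-side configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config           # hypothetical name
data:
  # Replica pods load this file; the first pod (redis-0) starts
  # without it and therefore runs as the master.
  replica.conf: |
    replicaof redis-0.redis-headless.default.svc.cluster.local 6379
```

Typically an init container or entrypoint script inspects the pod's ordinal (redis-0 vs redis-1, redis-2, ...) to decide whether to start as master or load replica.conf; the linked article walks through a complete version of this.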

How is Kubernetes API setup within Rancher

As a proof of concept we are trying Kubernetes with Rancher.
Currently we have in total 10 machines for the environment.
3 machines for ETCD (labels: etcd=true)
3 machines for API (labels: orchestration=true )
4 machines as K8S worker nodes (labels: compute=true)
I need to evaluate how the Kubernetes API is set up in the Rancher environment. From the K8S dashboard there is only the "kubernetes" service running in the "default" namespace on port 443.
I want to know which containers Rancher uses to run the API.
What HA model is used on the hosts labeled orchestration=true (master-master, master-slave)? What is the API communication flow? What can an external user get from it?
Would be grateful for any kind of tips, links and docs.

Redis cluster on kubernetes

I am trying to set up a Redis cluster on Kubernetes. One of my requirements is that the Redis cluster should be resilient to a Kubernetes cluster restart (due to issues like power failure).
I have tried a Kubernetes StatefulSet and a Deployment.
With a StatefulSet, a new set of IP addresses is assigned to the pods on reboot, and since Redis Cluster works on IP addresses, the nodes are not able to connect to each other and form the cluster again.
With services providing static IPs over individual Redis instance deployments, Redis again stores the pod IPs even though I created the cluster using the static service IP addresses, so on reboot the nodes are not able to connect to each other and form the cluster again.
My redis-cluster statefulset config
My redis-cluster deployment config
Redis 4.0.0 solved this problem by adding support for announcing the cluster node IP and port (cluster-announce-ip and cluster-announce-port).
Set cluster-announce-ip to the static IP of the service in front of each Redis instance deployment.
Link to setup instructions: https://github.com/zuxqoj/kubernetes-redis-cluster/blob/master/README-using-statefulset.md
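For one Redis pod, the relevant redis.conf fragment would look roughly like this (the IP shown is an illustrative ClusterIP, not one from the question):

```conf
# Fragment of redis.conf for a single cluster node, announcing the
# static ClusterIP of the Service that fronts this pod instead of
# the pod's own (ephemeral) IP.
cluster-enabled yes
cluster-announce-ip 10.96.0.21      # static ClusterIP of this node's Service
cluster-announce-port 6379          # client port other nodes should use
cluster-announce-bus-port 16379     # cluster bus port (client port + 10000)
```

Because the other nodes record the announced Service IPs rather than pod IPs, the cluster can re-form after the pods restart with new addresses.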
Are you able to use DNS names instead of IP addresses? I think that is the preferred way to route your traffic to individual nodes in a statefulset:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
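The stable per-pod DNS names described there come from pairing the StatefulSet with a headless Service; a sketch with illustrative names:

```yaml
# Illustrative headless Service: clusterIP: None means no virtual IP,
# and each StatefulSet pod gets a stable DNS record instead.
apiVersion: v1
kind: Service
metadata:
  name: redis                # must match the StatefulSet's serviceName
spec:
  clusterIP: None            # headless
  selector:
    app: redis               # must match the StatefulSet's pod labels
  ports:
  - port: 6379
```

A StatefulSet with `serviceName: redis` then yields names like redis-0.redis.default.svc.cluster.local that survive pod restarts, which can be used wherever Redis accepts a hostname instead of an IP.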