I'm trying to create a RabbitMQ cluster.
The instances have been set up and installed identically, they can resolve each other's hostnames (both with dig and rabbitmqctl resolve_hostname), and their cookie hash is the same.
I'm wondering whether there are additional steps to setting up a RabbitMQ cluster when running in EC2.
I'm running RabbitMQ 3.9.13 on Ubuntu 20.04.
Thank you all in advance
-brej
Basically, that should be sufficient. Make sure to declare all of these settings in the RabbitMQ config file; this way, each time a node starts, it will be able to rejoin the cluster when needed.
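For example, with classic config peer discovery you could write the cluster members into rabbitmq.conf on every node. This is only a rough sketch; the node names rabbit@ip-10-0-0-1 and rabbit@ip-10-0-0-2 are placeholders for your actual EC2 hostnames:

# Minimal sketch (run as root on every node; hostnames are placeholders)
cat > /etc/rabbitmq/rabbitmq.conf <<'EOF'
cluster_formation.peer_discovery_backend = classic_config
cluster_formation.classic_config.nodes.1 = rabbit@ip-10-0-0-1
cluster_formation.classic_config.nodes.2 = rabbit@ip-10-0-0-2
EOF

With that in place, a restarted node can find its peers again on boot.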
I am a newbie to GKE. I created a GKE cluster with a very simple setup: it only has one GPU node, and everything else was left at the defaults. After the cluster was up, I was able to list the nodes and SSH into them. But I have two questions here.
I tried to install the NVIDIA driver using the command:
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
It output:
kubectl apply --filename https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
daemonset.apps/nvidia-driver-installer configured
But 'nvidia-smi' cannot be found at all. Should I do something else to make it work?
On the worker node, there was no .kube directory or 'config' file. I had to copy them from the master node to the worker node to make things work. And the config file on the master node updates automatically, so I have to copy it again and again. Did I miss some steps in the creation of the cluster, or how can I resolve this problem?
I'd appreciate it if someone could shed some light on this. It drove me crazy after working on it for several days.
Tons of thanks.
Alex.
For the DaemonSet to work, your worker node needs to carry the cloud.google.com/gke-accelerator label (see this line). The DaemonSet checks for this label on a node before scheduling any driver-installer pods. I'm guessing the default node pool you created did not have this label on it. You can find more details in the GKE docs here.
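As a quick check, you can list the node labels to see whether the accelerator label is there; if you create the GPU node pool through gcloud, the label is added for you (the cluster, pool, zone, and accelerator type below are placeholders):

kubectl get nodes --show-labels | grep cloud.google.com/gke-accelerator

gcloud container node-pools create gpu-pool \
    --cluster my-cluster --zone us-central1-a \
    --accelerator type=nvidia-tesla-t4,count=1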
The worker nodes are, by design, just that: worker nodes. They do not need privileged access to the Kubernetes API, so they don't need any kubeconfig files. The communication between worker nodes and the API is strictly controlled through the kubelet binary running on the node. Therefore, you will never find kubeconfig files on a worker node. You should never put them on a worker node either, since if a node gets compromised, the keys in that file can be used to damage the API server. Instead, make it a habit to either use the master nodes for kubectl commands or, better yet, keep the kubeconfig on your local machine, keep it safe, and issue commands to your cluster remotely.
After all, all you need is access to an API endpoint for your Kubernetes API server, and it shouldn't matter where you access it from, as long as the endpoint is reachable. So, there is no need whatsoever to have kubeconfig on the worker nodes :)
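For example, on GKE something like this (cluster name and zone are placeholders) pulls the credentials down to your workstation and lets you talk to the cluster from there:

gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl get nodes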
I tried to create a cluster in Apache Geode by providing the hostname and IP address of the remote system in the gemfire.properties file. Somehow, I am not able to create a cluster.
Can anybody please help with the steps to create a cluster (including multi-site)?
Thank you
It's not clear from the description if you just want to create a simple GemFire cluster or multiple clusters connected through the Geode WAN replication mechanism...
That said, to start a local Geode cluster you can go through Apache Geode in 15 Minutes or Less, a quick introduction that shows you how to use gfsh to start a locator and some servers, create a region, monitor the system using PULSE, etc.
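Roughly, the gfsh steps from that tutorial look like this (all names are placeholders, not a full recipe):

gfsh -e "start locator --name=locator1" \
     -e "start server --name=server1" \
     -e "create region --name=exampleRegion --type=REPLICATE"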
To set up WAN replication, on the other hand, you can go through Configuring a Multi-site (WAN) System. The most important thing to note about this configuration is that your locators need to know about the locators on the remote system, so you need to make sure that the remote-locators property is correctly configured. Once the locators can talk to each other over the WAN, they will share the connection information with the local servers and these, in turn, will be able to communicate with the servers on the remote clusters.
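As an illustration only (host names, ports, and distributed system ids below are placeholders), each site's locator could be started with remote-locators pointing at the other site:

# On site A
gfsh -e "start locator --name=locatorA --J=-Dgemfire.distributed-system-id=1 --J=-Dgemfire.remote-locators=siteB-host[10334]"

# On site B
gfsh -e "start locator --name=locatorB --J=-Dgemfire.distributed-system-id=2 --J=-Dgemfire.remote-locators=siteA-host[10334]"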
Hope this helps.
Cheers.
Objective
I want to access the Redis database in Kubernetes from a function inside IBM Cloud Functions, using JavaScript.
Question
How do I get the right URI, when redis is running on a Pod in Kubernetes?
Situation
I used this sample to set up the Redis database in Kubernetes: This is the link to the sample in Kubernetes.
I run Kubernetes inside IBM Cloud.
Findings
I was not able to find an answer to my question in the Redis documentation.
As far as I understand, no password is configured by default.
Is this assumption right?
redis://[USER]:[PASSWORD]@[CLUSTER-PUBLIC-IP]:[PORT]
Thanks for the help ... I know this is maybe too simple a question, but at the moment I can't see the forest for the trees ;-)
As far as I understand, no password is configured by default.
Yes, there is no default password in that image with Redis, you are right.
If you follow the instructions you mentioned, you will use kubectl port forwarding, which forwards the Redis port from the cluster to your local machine by calling kubectl port-forward redis-master 6379:6379.
So in that case, Redis will be available at redis://localhost:6379 on your PC.
If you want to make Redis available directly from outside of the cluster, you need to create a Service of type NodePort, a Service of type LoadBalancer (if you are running in a cloud), or a Service exposed through an Ingress.
Inside the cluster, you can create a Service with a ClusterIP for your Redis pod (this is actually just a plain Service, since every Service gets a ClusterIP), and Redis will then be available at:
redis://[USER]:[PASSWORD]@[SERVICE-IP]:[PORT]
Here is good official documentation about connecting applications with Services.
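As a minimal sketch (the pod and Service names are placeholders), you could expose the Redis pod either way with kubectl expose and then read the ClusterIP for the URI above:

# ClusterIP Service, reachable only inside the cluster
kubectl expose pod redis-master --port=6379 --name=redis

# NodePort Service, reachable from outside the cluster on every node
kubectl expose pod redis-master --port=6379 --type=NodePort --name=redis-external

# The IP to put into redis://[SERVICE-IP]:6379
kubectl get service redis -o jsonpath='{.spec.clusterIP}'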
I'm trying to find a viable solution using Redis in a master/slave (at least 2 slaves) configuration. I have Docker containers with Ubuntu 16.04 and the Redis server/Sentinel installed (latest stable).
I'm not looking for a clustered setup. I would like to have the master redis db on one pod, and the slaves on their own pod (all three will be on separate vm's or physical boxes). I'll want to use yaml/Kubernetes nodeSelector to assign where they can spin up.
From my research, it appears I want to run a Redis Sentinel service alongside each pod as well. The key here is that I want to specify where each master/slave pod can run. I've investigated https://github.com/kubernetes/kubernetes/tree/master/examples/redis but that does not give me the control I want. Maybe Redis 4.x helps, but I can't find any examples. Any pointers would be appreciated; I've searched all over this site for an answer without any luck.
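Roughly, what I have in mind is something like this (the node name, label, pod name, and image tag are placeholders I'm making up), with one such pod per box:

kubectl label nodes node-a redis-role=master

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: redis-master
spec:
  nodeSelector:
    redis-role: master
  containers:
  - name: redis
    image: redis:4.0
    ports:
    - containerPort: 6379
EOF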
Can I have multiple machines execute the tasks and return messages that are distributed by Django? I looked into Celery/RabbitMQ, but I'm not sure if I can set up Celery workers on remote computers. Can anyone guide me through this?
If this is not possible or very hard, is there an alternative solution to the problem?
You can do this by installing your Django project on the remote computer, and then ensuring that it is configured to use the correct broker, database server, and media directory (assuming your tasks need access to those).
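For instance (the project name "myproject" is a placeholder), once the code and settings are on the remote machine and the broker URL in your Django settings points at the shared RabbitMQ instance, you just start a worker there:

# Run from the Django project directory on the remote machine
celery -A myproject worker --loglevel=info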