I am trying to run my Python scripts against a Redis cluster that I want to create across different machines, not on a single local machine.
Can someone please tell me how to do that?
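For example, connecting from Python with the redis-py client looks roughly like this (a sketch: redis-py 4.x ships cluster support, and the addresses below are placeholders for your own machines):

    # pip install redis
    from redis.cluster import RedisCluster, ClusterNode

    # Any reachable node of the cluster works as a startup node; the client
    # discovers the rest of the cluster topology from it.
    startup_nodes = [
        ClusterNode("10.0.0.1", 7000),
        ClusterNode("10.0.0.2", 7000),
    ]
    rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True)
    rc.set("greeting", "hello")
    print(rc.get("greeting"))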
I'm trying to create a RabbitMQ cluster.
The instances have been set up identically (same installation steps), they can resolve each other's hostnames (both with dig and rabbitmqctl resolve_hostname), and their cookie hash is the same.
I'm wondering whether or not there are more steps to setting up a RabbitMQ cluster when in EC2.
I'm running RabbitMQ 3.9.13 and Ubuntu 20.04
Thank you all in advance
-brej
Basically, that should be sufficient. Make sure to declare all these settings in the RabbitMQ config file; this way, each time a node starts, it will be able to reconnect to the cluster when needed.
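For example, with classic config peer discovery, the relevant section of rabbitmq.conf looks roughly like this (a sketch; the node names are placeholders for your own hosts):

    # /etc/rabbitmq/rabbitmq.conf -- node names below are placeholders
    cluster_formation.peer_discovery_backend = classic_config
    cluster_formation.classic_config.nodes.1 = rabbit@node1.internal
    cluster_formation.classic_config.nodes.2 = rabbit@node2.internal
    cluster_formation.classic_config.nodes.3 = rabbit@node3.internal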
Dask jobqueue seems to be a very nice solution for distributing jobs to PBS/Slurm-managed clusters. However, if I'm understanding its use correctly, you must create an instance of PBSCluster/SLURMCluster on the head/login node. Then, on the same node, you can create a client instance and start submitting jobs.
What I'd like to do is let jobs originate on a remote machine, be sent over SSH to the cluster head node, and then get submitted to dask-jobqueue. I see that Dask has support for sending jobs over SSH to a distributed.deploy.ssh.SSHCluster, but this seems to be designed for immediate execution after SSH, as opposed to taking the further step of putting the job in the job queue.
To summarize, I'd like a workflow where jobs go remote --ssh--> cluster-head --slurm/jobqueue--> cluster-node. Is this possible with existing tools?
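For reference, the single-node pattern described above looks roughly like this on the login node (a sketch; the queue name and resource numbers are placeholders):

    # pip install dask-jobqueue
    from dask.distributed import Client
    from dask_jobqueue import SLURMCluster

    # Runs on the cluster head/login node; each Dask worker is a Slurm job.
    cluster = SLURMCluster(
        queue="normal",       # placeholder partition name
        cores=8,
        memory="16GB",
        walltime="01:00:00",
    )
    cluster.scale(jobs=4)     # submit 4 worker jobs to Slurm

    client = Client(cluster)
    future = client.submit(sum, range(100))
    print(future.result())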
I am currently looking into this. My idea is to set up an SSH tunnel with paramiko and then use Pyro5 to communicate with the cluster object from my local machine.
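One possible shape for that idea (an untested sketch: it uses the sshtunnel helper, which wraps paramiko, and assumes a Pyro5 daemon on the head node has already registered the cluster object under some name; the hostname, username, port, object name, and submit_job method are all placeholders):

    # pip install sshtunnel Pyro5
    import Pyro5.api
    from sshtunnel import SSHTunnelForwarder  # paramiko-based tunnel helper

    # Forward a local port to the Pyro5 daemon running on the head node.
    with SSHTunnelForwarder(
        "head.cluster.example.com",             # placeholder hostname
        ssh_username="me",                      # placeholder user
        remote_bind_address=("127.0.0.1", 9090),
        local_bind_address=("127.0.0.1", 9090),
    ):
        # "cluster" is a placeholder for whatever name the daemon registered.
        with Pyro5.api.Proxy("PYRO:cluster@127.0.0.1:9090") as proxy:
            proxy.submit_job("my_script.py")    # hypothetical remote method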
I have a Nimbus+Storm cluster using Zookeeper, and I wish to move my cluster and point it to a new Zookeeper. Do you know if this is possible? Can I keep all the information from the old Zookeeper and save it in the new one? Is it possible to do it without downtime?
I have looked on the internet for this procedure but have not found much.
Would it be as simple as changing the storm.yaml file on both the master and worker nodes? Do I need a restart afterwards?
# storm.zookeeper.servers:
# - "server1"
# - "server2"
If you just change storm.yaml, you'd be pointing Storm at a new, empty Zookeeper cluster, and it will be as if you had just installed Storm from scratch. More likely, you want to grow your Zookeeper cluster to include your new machines, then update storm.yaml to point at the new machines, then shrink the cluster to exclude the machines you want to move away from. That way, your Zookeeper quorum is preserved even though you've moved to other physical machines.
This is easier to do on Zookeeper 3.5 with dynamic reconfiguration (http://zookeeper.apache.org/doc/r3.5.5/zookeeperReconfig.html). I'm unsure whether Storm will run on Zookeeper 3.5, but you may consider investigating whether you can upgrade to 3.5 before growing/shrinking the cluster.
Otherwise you will have to do a rolling restart to add the new Zookeeper nodes, then do another one to remove the old machines once the cluster has stabilized.
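With dynamic reconfiguration, growing and then shrinking the ensemble comes down to a pair of CLI calls (a sketch; the server IDs and addresses are placeholders):

    # From zkCli.sh connected to the live ensemble:
    # add the new machine as participant id 4, then drop old id 1
    reconfig -add "server.4=10.0.0.14:2888:3888;2181"
    reconfig -remove 1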
Let me suggest a hack here. This is a script provided by Microsoft for migration on HDInsight clusters, but you can adapt it to your needs.
The script can be downloaded from https://github.com/hdinsight/hdinsight-storm-examples/tree/master/tools/zkdatatool-1.0 and you can read more about it here:
https://blogs.msdn.microsoft.com/azuredatalake/2017/02/24/restarting-storm-eventhub/
I have used it in the past when I had to migrate some stuff between PaaS clusters, and I can confirm it works ok!
I was able to create a cluster of Redis instances on my local machine.
But I was wondering how we can achieve this in a PaaS environment, i.e. in DC/OS?
Any help would be appreciated.
If you're specifically looking at DC/OS, you can have a look at the example at https://github.com/dcos/examples/tree/master/redis which covers some of the basic components as you get started.
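For a first experiment, a single Redis instance can be described with a minimal Marathon app definition along these lines (a sketch; the resource sizes are placeholders, and a real cluster needs several instances plus persistent storage):

    {
      "id": "/redis",
      "cpus": 1,
      "mem": 512,
      "instances": 1,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "redis:3.2"
        }
      }
    }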
I'm trying to find a viable solution using Redis in a Master/Slave (at least 2 slaves) configuration. I have Docker containers with Ubuntu 16.04 OS and Redis server/sentinel installed (latest stable).
I'm not looking for a clustered setup. I would like to have the master Redis DB on one pod, and the slaves on their own pods (all three will be on separate VMs or physical boxes). I want to use a Kubernetes nodeSelector in the YAML to assign where they can spin up.
From my research, it appears I want to run Redis Sentinel services on each pod as well. The key here is that I want to specify where each Master/Slave pod can run. I've investigated https://github.com/kubernetes/kubernetes/tree/master/examples/redis but that does not give me the control I want. Maybe Redis 4.x helps, but I can't find any examples. Any pointers would be appreciated. I've searched all over this site for an answer without any luck.
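For the placement part specifically, a nodeSelector in each pod spec is enough to pin the master and each slave to labeled nodes. A sketch for the master (the label key/value and node name are placeholders):

    # First label the target node:  kubectl label nodes node-a redis-role=master
    apiVersion: v1
    kind: Pod
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      nodeSelector:
        redis-role: master        # matches the label applied above
      containers:
      - name: redis
        image: redis:3.2
        ports:
        - containerPort: 6379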