I am working on a migration project from Oracle to Redis. The business logic and CRUD operations currently implemented in PL/SQL will be rewritten as Lua scripts, which will be called from Java using Jedis.
What is the best way to deploy these Lua scripts?
Can I load/register the scripts in the Redis DB manually and then call them using the EVALSHA method from Java? What possible issues might I run into?
Or can I create an API with all the scripts, load them from the Java code, and use the EVAL method to call them?
If I use a master-slave architecture with Sentinel (1 master, 2 slaves, and 3 sentinels) for high availability with automatic failover, do I need three servers, or can I go with one server with three ports?
What is the best way to deploy these Lua scripts?
You should use EVALSHA to determine whether the script has already been loaded into Redis. If it hasn't (you get a NOSCRIPT error), load it with SCRIPT LOAD and retry. That way you don't have to keep sending the full script contents to Redis every time you execute it.
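Here is a minimal sketch of that pattern with Jedis, assuming a Jedis 3.x/4.x-style client; the Lua script, key names, and class name are made up for illustration:

```java
import java.util.Collections;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisDataException;

public class LuaScriptRunner {

    // Hypothetical script: reads a "row" stored as a Redis hash field.
    private static final String GET_FIELD_SCRIPT =
            "return redis.call('HGET', KEYS[1], ARGV[1])";

    // SHA1 of the script, filled in lazily by SCRIPT LOAD.
    private String scriptSha;

    public Object run(Jedis jedis, String key, String field) {
        if (scriptSha == null) {
            // First use: register the script and remember its SHA1.
            scriptSha = jedis.scriptLoad(GET_FIELD_SCRIPT);
        }
        try {
            // Normal path: only the 40-character SHA1 goes over the wire.
            return jedis.evalsha(scriptSha,
                    Collections.singletonList(key),
                    Collections.singletonList(field));
        } catch (JedisDataException e) {
            // Fail-safe: the script cache was flushed, or a failover promoted
            // a node that never saw SCRIPT LOAD. Re-load the script and retry.
            if (e.getMessage() != null && e.getMessage().startsWith("NOSCRIPT")) {
                scriptSha = jedis.scriptLoad(GET_FIELD_SCRIPT);
                return jedis.evalsha(scriptSha,
                        Collections.singletonList(key),
                        Collections.singletonList(field));
            }
            throw e;
        }
    }
}
```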
Can I load/register the scripts in the Redis DB manually and then call them using the EVALSHA method from Java? What possible issues might I run into? Or can I create an API with all the scripts, load them from the Java code, and use the EVAL method to call them?
You can definitely load the scripts manually. However, it's probably better for the application to handle the case where the script isn't in Redis as a fail-safe (catch the NOSCRIPT error and re-load the script). How you obtain the script contents is up to you.
If I use a master-slave architecture with Sentinel (1 master, 2 slaves, and 3 sentinels) for high availability with automatic failover, do I need three servers, or can I go with one server with three ports?
It depends on your needs. Running everything on a single server gives you a single point of failure: if that server goes down or is offline, you lose the ability to interact with Redis entirely, and the automatic failover buys you nothing.
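Whichever layout you choose, on the Java side you connect through the Sentinels rather than directly to the master, which is what makes the failover transparent to the application. A minimal sketch with Jedis follows; the host names, ports, and the "mymaster" name are placeholders you would take from your own sentinel.conf:

```java
import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelExample {
    public static void main(String[] args) {
        // Placeholder addresses for the three Sentinels. Whether they live on
        // one machine (three ports) or on three machines only changes these
        // host:port strings, not the application code.
        Set<String> sentinels = new HashSet<>();
        sentinels.add("host1:26379");
        sentinels.add("host2:26379");
        sentinels.add("host3:26379");

        // "mymaster" must match the master name configured in sentinel.conf.
        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels)) {
            try (Jedis jedis = pool.getResource()) {
                // The pool always hands back a connection to the current master,
                // so the application keeps working after an automatic failover.
                jedis.set("ping", "pong");
                System.out.println(jedis.get("ping"));
            }
        }
    }
}
```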
Related
I have a FastAPI-based REST application where I need some scheduled tasks that run every 30 minutes. The application runs on Kubernetes, so the number of instances is not fixed. I want the scheduled jobs to be triggered from only one of the available instances, not from every running instance, which would create a race condition, so I need some kind of locking mechanism that prevents a scheduler from firing if one is already running. My app connects to a MySQL-compatible Aurora DB running on AWS. Can I achieve this with APScheduler, and if not, are there any alternatives available?
Dask-jobqueue seems to be a very nice solution for distributing jobs to PBS/Slurm-managed clusters. However, if I'm understanding its use correctly, you must create the PBSCluster/SLURMCluster instance on the head/login node. Then, on that same node, you create a client instance to which you can start submitting jobs.
What I'd like to do is let jobs originate on a remote machine, be sent over SSH to the cluster head node, and then get submitted to dask-jobqueue. I see that Dask has support for sending jobs over SSH via distributed.deploy.ssh.SSHCluster, but this seems to be designed for immediate execution after the SSH hop, as opposed to taking the further step of putting the job into the queue.
To summarize, I'd like a workflow where jobs go remote --ssh--> cluster-head --slurm/jobqueue--> cluster-node. Is this possible with existing tools?
I am currently looking into this. My idea is to set up an SSH tunnel with paramiko and then use Pyro5 to communicate with the cluster object from my local machine.
I'm using Azure Redis Cache for certain performance-monitoring services. Basically, when events like page loads occur, I send a fire-and-forget command to Redis to record the event. My goal is for my app to function fine whether or not it can contact the Redis server, and I would be OK with losing some events if necessary. I'm looking for a best practice for this scenario. I've been finding that even though I'm using fire-and-forget, the app stalls when the web server runs into high latency or connectivity issues with Redis.
I'm using StackExchange.Redis. Are there any best-practice configuration options or programming practices for this scenario?
The way I was implementing the singleton pattern for the connection turned out to block requests. Once I fixed this, my app behaves as I want (i.e. it still functions when the Redis connection dies).
Recently, I started having some trouble with one of my Redis clusters: used_memory and used_memory_rss keep increasing constantly.
According to some Googling, I found following discussion:
https://github.com/antirez/redis/issues/4570
Now I am wondering: is it safe to run the SCRIPT FLUSH command on my production Redis cluster?
Yes, you can run the SCRIPT FLUSH command safely in a production cluster. The only potential side effect is that it blocks the server while it executes. Note, however, that you'll want to call it on each of your nodes, since the script cache is local to each node.
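A sketch of doing that from Java with Jedis, assuming the Jedis 3.x-style cluster API where getClusterNodes() returns a map of per-node pools (the seed address is a placeholder):

```java
import java.util.Map;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPool;

public class FlushClusterScripts {
    public static void main(String[] args) {
        // Placeholder seed node; the client discovers the rest of the cluster.
        try (JedisCluster cluster = new JedisCluster(new HostAndPort("10.0.0.1", 6379))) {
            // The script cache is local to each node, so SCRIPT FLUSH has to be
            // issued on every node, not just once through the cluster client.
            Map<String, JedisPool> nodes = cluster.getClusterNodes();
            for (Map.Entry<String, JedisPool> entry : nodes.entrySet()) {
                try (Jedis node = entry.getValue().getResource()) {
                    node.scriptFlush();
                    System.out.println("Flushed scripts on " + entry.getKey());
                }
            }
        }
    }
}
```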
Is it possible to build a one-master (port 6378) plus two-slave (read-only, ports 6379 and 6380) "cluster" on one machine to increase performance (especially reads), without using any proxy? Can the site or code connect to the master instance and read data from the read-only nodes? Or if I use 3 instances of Redis, do I have to use a proxy anyway?
Edit: It seems like the slave nodes don't have any data; they try to redirect to the master instance, but that's not the correct way, am I right?
Definitely. You can code the paths in your app so writes and reads go to different servers. Depending on the programming language that you're using and the Redis client, this may be easier or harder to achieve.
Edit: that said, I'm unsure how you're running a cluster with a single master; a Redis Cluster needs at least 3 master nodes.
You need to send a READONLY command after connecting to the slave before you can execute any read commands.
READONLY only affects the current socket session, which means you need to send it on every new TCP connection.
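A minimal sketch of that read/write split with Jedis, using the ports from the question. It assumes the nodes form a Redis Cluster (which the redirects in the question suggest) with the single master owning all hash slots, so any key lands on it:

```java
import redis.clients.jedis.Jedis;

public class ReplicaReadExample {
    public static void main(String[] args) {
        // Writes go to the master (port 6378 in the question).
        try (Jedis master = new Jedis("127.0.0.1", 6378)) {
            master.set("greeting", "hello");
        }

        // Reads go to a read-only replica (port 6379). READONLY must be sent
        // on every new TCP connection before the replica will serve reads
        // instead of redirecting the client back to the master.
        try (Jedis replica = new Jedis("127.0.0.1", 6379)) {
            replica.readonly();
            System.out.println(replica.get("greeting"));
        }
    }
}
```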