Is a Redis transaction (WATCH, MULTI, EXEC) possible over a multiplexed connection? - redis

I am using a Redis library that offers connection multiplexing (I am currently using the Rust lib, but I think the question is relevant for any implementation).
From what I've read about multiplexing (and also what I understand from the lib implementation), it utilizes the same connection to handle db operations from multiple contexts (threads/tasks/etc.).
Now, I'm not sure what will happen if WATCH is called in parallel from 2 different contexts on the same multiplexed connection. Will the EXEC from one context cancel the WATCH in the other, or does Redis somehow know how to distinguish between contexts even though they're utilizing the same connection?

No, it's not possible over a multiplexed connection. The Redis transaction context is attached to a specific "client", meaning a specific connection.
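For illustration, here is a minimal sketch of the usual alternative - giving the transaction a dedicated connection - shown with redis-py rather than the Rust lib, purely because the question applies to any client; the key name "balance" is just an example:

import redis

# Assumption: a local Redis server and the redis-py client.
pool = redis.ConnectionPool(host="localhost", port=6379)
r = redis.Redis(connection_pool=pool)

# A Pipeline that calls watch() holds one connection from the pool for the
# whole WATCH ... EXEC window, so no other context can issue an EXEC or
# UNWATCH on that connection and silently clear the watch.
with r.pipeline() as pipe:
    pipe.watch("balance")                 # WATCH runs immediately on this connection
    current = int(pipe.get("balance") or 0)
    pipe.multi()                          # switch to buffered MULTI mode
    pipe.set("balance", current + 10)
    pipe.execute()                        # EXEC; raises WatchError if "balance" changed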

Related

Do I need a new client connection when using redis transactions?

My application uses a singleton redis connection everywhere; it's initialized at startup.
My understanding of MULTI.EXEC() tells me that all my WATCHed keys would be UNWATCHed when MULTI.EXEC() is called anywhere in the application.
This would mean that all WATCHed keys, irrespective of which MULTI block they were WATCHed for, will be unwatched, defeating the whole purpose of WATCHing them.
Is my understanding correct?
How do I avoid this situation, should I create a new connection for each transaction?
This process happens inside the Redis server and will block all incoming commands, so it doesn't matter whether you use a single connection or multiple connections (all connections will be blocked).
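For reference, this is roughly what the check-and-set pattern looks like when each transaction holds its own connection rather than the application-wide singleton (a sketch assuming redis-py and a made-up transfer between two counters):

import redis

r = redis.Redis(host="localhost", port=6379)        # backed by its own connection pool

def transfer(src, dst, amount):
    # The pipeline checks out a dedicated connection when watch() is called,
    # so an EXEC issued elsewhere in the application cannot clear this WATCH.
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(src)
                balance = int(pipe.get(src) or 0)
                if balance < amount:
                    pipe.unwatch()
                    return False
                pipe.multi()
                pipe.decrby(src, amount)
                pipe.incrby(dst, amount)
                pipe.execute()                       # fails if src changed since WATCH
                return True
            except redis.WatchError:
                continue                             # another client touched src; retry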

How to design a multi-process program using redis in python

I just started to use the redis cache in python. I read the tutorial but still feel confused about the concepts of "connectionpool", "connection", etc.
I am trying to write a program which will be invoked multiple times from the console in different processes. They will all get and set the same shared in-memory redis cache using the same set of keys.
So to make it thread (process) safe, should I have one global connectionpool and get connections from the pool in different processes? Or should I have one global connection? What's the right way to do it?
Thanks,
Each instance of the program should spawn its own ConnectionPool. But this has nothing to do with thread safety. Whether or not your code is thread safe will depend on the type of operations you will be executing, and if you have multiple instances which may read and write concurrently, you need to look into using transactions, which are built into redis.
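A sketch of that layout, assuming redis-py (the pool, key, and function names are illustrative only):

import os
import redis

# One ConnectionPool per process, i.e. per console invocation; threads inside
# the process can share it, and each command checks a connection out and back in.
pool = redis.ConnectionPool(host="localhost", port=6379, db=0)
r = redis.Redis(connection_pool=pool)

def bump_counter(key):
    # Single commands such as INCR are atomic on the server, so plain reads and
    # writes from many processes are safe; multi-step invariants need
    # WATCH/MULTI/EXEC (or a Lua script), as noted above.
    r.incr(key)
    return r.get(key)

print(os.getpid(), bump_counter("shared:counter"))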

How does StackExchange.Redis use multiple endpoints and connections?

As explained in the StackExchange.Redis Basics documentation, you can connect to multiple Redis servers, and StackExchange.Redis will automatically determine the master/slave setup. Quoting the relevant part:
A more complicated scenario might involve a master/slave setup; for this usage, simply specify all the desired nodes that make up that logical redis tier (it will automatically identify the master):
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("server1:6379,server2:6379");
I performed a test in which I triggered a failover, such that the master would go down for a bit, causing the old slave to become the new master, and the old master to become the new slave. I noticed that in spite of this change, StackExchange.Redis keeps sending commands to the old master, causing write operations to fail.
Questions on the above:
How does StackExchange.Redis decide which endpoint to use?
How should multiple endpoints (as in the above example) be used?
I also noticed that for each connect, StackExchange.Redis opens two physical connections, one of which is some sort of subscription. What is this used for exactly? Is it used by Sentinel instances?
What should happen there is that it uses a number of things (in particular the defined replication configuration) to determine which is the master, and directs traffic at the appropriate server (respecting the "server" parameter, which defaults to "prefer master", but which always sends write operations to a master).
If a "cannot write to a readonly slave" (I can't remember the exact text) error is received, it will try to re-establish the configuration, and should switch automatically to respect this. Unfortunately, redis does not broadcast configuration changes, so the library can't detect this ahead of time.
Note that if you use the library methods to change master, it can exploit pub/sub to detect that change immediately and automatically.
Re the second connection: that would be for pub/sub; it spins this up ahead of time, as by default it attempts to listen for the library-specific configuration broadcasts.

Does StackExchange.Redis support MONITOR?

I recently migrated from Booksleeve to StackExchange.Redis.
For monitoring purposes, I need to use the MONITOR command.
In the wiki I read
From the IServer instance, the Server commands are available
But I can't find any method concerning MONITOR in IServer; after a quick search in the repository, it seems this command is not mapped even though RedisCommand.MONITOR is defined.
So, is the MONITOR command supported by StackExchange.Redis?
Support for monitor is not provided, for multiple reasons:
invoking monitor is a path of no return; a monitor connection can never be anything except a monitor connection - it certainly doesn't play nicely with the multiplexer (although I guess a separate connection could be used)
monitor is not something that is generally encouraged - it has impact; and when you do use it, it would be a good idea to run it as close to the server as possible (typically in a terminal to the server itself)
it should typically be used for short durations
But more importantly, perhaps, I simply haven't seen a suitable use-case or had a request for it. If there is some scenario where monitor makes sense, I'm happy to consider adding some kind of support. What is it that you want to do with it here?
Note that caveat on the monitor page you link to:
In this particular case, running a single MONITOR client can reduce the throughput by more than 50%. Running more MONITOR clients will reduce throughput even more.
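If all you need is a short diagnostic session, one separate-connection option (outside StackExchange.Redis) is to point another client at the server just for the duration; a sketch with redis-py, assuming such a client is acceptable:

import redis

r = redis.Redis(host="localhost", port=6379)

# monitor() checks out a connection of its own and issues MONITOR on it; that
# connection can never be reused for normal commands, so keep the session short.
with r.monitor() as m:
    for cmd in m.listen():        # yields every command the server processes
        print(cmd["time"], cmd["command"])
        break                     # stop quickly; MONITOR costs real throughput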

Why does Quartz Scheduler (JobStoreCMT) require the use of two datasources?

I found this answer:
1. Long answer to Quartz requiring two data sources; however, if you want an even deeper answer, I believe I'll need to dig into the source code or do more research:
a. JobStoreCMT relies upon transactions being managed by the application which is using Quartz. A JTA transaction must be in progress before attempting to schedule (or unschedule) jobs/triggers. This allows the "work" of scheduling to be part of the application's "larger" transaction. JobStoreCMT actually requires the use of two datasources - one that has its connections' transactions managed by the application server (via JTA) and one datasource whose connections do not participate in global (JTA) transactions. JobStoreCMT is appropriate when applications are using JTA transactions (such as via EJB Session Beans) to perform their work. (Ref: http://quartz-scheduler.org/documentation/quartz-1.x/configuration/ConfigJobStoreCMT)
However, there is a believed conflict with a non-transactional driver in our particular application. Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
No, you must have a datasource of each type. Invocations on the API by the client application use the connections that are XA-capable, so that the work joins the application's transaction. Work done by the scheduler's internal threads uses the non-XA connections.
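For reference, the two datasources are declared separately in quartz.properties; a sketch with placeholder names (managedDS / nonManagedDS and the JDBC details are illustrative, not required values):

# quartz.properties (sketch)
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT

# Connections whose transactions are managed by the application server (JTA),
# used when the client application schedules/unschedules work
org.quartz.jobStore.dataSource = managedDS

# Connections that do NOT take part in global (JTA) transactions,
# used by the scheduler's own internal threads
org.quartz.jobStore.nonManagedTXDataSource = nonManagedDS

# The managed datasource is typically looked up from JNDI
org.quartz.dataSource.managedDS.jndiURL = java:comp/env/jdbc/myAppDS

# The non-managed datasource is a plain JDBC connection pool
org.quartz.dataSource.nonManagedDS.driver = org.postgresql.Driver
org.quartz.dataSource.nonManagedDS.URL = jdbc:postgresql://localhost/quartz
org.quartz.dataSource.nonManagedDS.user = quartz
org.quartz.dataSource.nonManagedDS.password = secret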