I want to know how the Redisson library supports the rollback operation. What I got from the Redisson GitHub page is:
Redisson uses locks for write operations and maintains data modification operations list till the commit/rollback operation.
But I am not able to understand what maintaining a list of data modification operations until the commit/rollback operation means.
Can anybody please explain how the rollback function works in Redisson, and how it handles the case when one of the commands throws an exception/error while the transaction is being processed?
Changes are applied to Redis only after the commit() method is invoked. On rollback(), Redisson discards the buffered operations and releases all acquired locks.
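For illustration, here is a minimal sketch of Redisson's documented transaction API (the single-server Config and the key names are assumptions):

```java
import org.redisson.Redisson;
import org.redisson.api.RMap;
import org.redisson.api.RTransaction;
import org.redisson.api.RedissonClient;
import org.redisson.api.TransactionOptions;
import org.redisson.config.Config;
import org.redisson.transaction.TransactionException;

public class TransactionExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        // Operations are buffered (and write locks taken) until commit()
        RTransaction transaction = redisson.createTransaction(TransactionOptions.defaults());
        RMap<String, String> map = transaction.getMap("myMap");
        map.put("key", "value");

        try {
            transaction.commit(); // changes become visible in Redis only here
        } catch (TransactionException e) {
            transaction.rollback(); // discards buffered operations, releases locks
        }

        redisson.shutdown();
    }
}
```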
I do not get the purpose of concurrent message processing for a saga. I'd expect it to behave more like an actor, so that all messages with the same CorrelationId are processed sequentially. The whole purpose of a saga is orchestration of a long-running process, so why does parallel message processing matter?
Can you give a legitimate example where handling messages concurrently for the same saga instance is beneficial compared to sequential mode?
Or do I understand it wrong, and concurrency just means several different saga instances running in parallel?
The reason I ask is this fragment from the NServiceBus docs:
The main reason for avoiding accessing data from external resources is possible contention and inconsistency of saga state. Depending on persister, the saga state is retrieved and persisted with either pessimistic or optimistic locking. If the transaction takes too long, it's possible another message will come in that correlates to the same saga instance. It might be processed on a different (concurrent) thread (or perhaps a scaled out endpoint) and it will either fail immediately (pessimistic locking) or while trying to persist the state (optimistic locking). In both cases the message will be retried.
There isn't one; messages for a single saga instance need to be processed sequentially. There's nothing special about saga configuration in MassTransit; you really want to use a separate endpoint for it and set the concurrency limit to one.
But that would kill performance for processing messages for different saga instances. To solve this, keep the concurrency limit higher than one and use the partitioning filter keyed by correlation id (a conceptual sketch follows this answer). Unfortunately, the partitioning filter requires per-message configuration, so you'd need to configure the partitioning for every message type that the saga consumes.
But it all depends on the use case. All the concurrency issues are resolved by retries when using persistence-based optimistic concurrency, which is documented per saga persistence provider. Certainly, it produces some noise by retrying database operations, but if the number of retries is under control, you can just keep it as it is.
If you hit tons of retries due to massive concurrent updates, you can fall back to partitioning your saga.
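MassTransit's partitioning filter is configured in .NET, but the underlying technique is easy to sketch. Below is a hypothetical Java illustration (all names invented, not MassTransit's API): each correlation id hashes to a fixed single-threaded worker, so messages for one saga instance are processed sequentially while different instances still run in parallel.

```java
import java.util.UUID;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of correlation-id partitioning (not MassTransit API).
public class CorrelationPartitioner {
    private final ExecutorService[] partitions;

    public CorrelationPartitioner(int partitionCount) {
        partitions = new ExecutorService[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            // One thread per partition => strictly sequential within a partition
            partitions[i] = Executors.newSingleThreadExecutor();
        }
    }

    public void dispatch(UUID correlationId, Runnable handler) {
        // The same correlation id always hashes to the same partition,
        // so messages for one saga instance never run concurrently.
        int index = Math.floorMod(correlationId.hashCode(), partitions.length);
        partitions[index].execute(handler);
    }
}
```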
I am trying to figure out how distributed locks are used/implemented in Redis using RedisTemplate. I have a race condition scenario, so I can't use optimistic locking with MULTI and EXEC.
I see RedisLockService implementations that implement org.springframework.cloud.cluster.lock.LockService, but that has been deprecated. Is there something new that has replaced it?
Why not use Redisson to implement a Redis lock? There is a complete set of different distributed lock implementations in Redisson.
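For example, a minimal sketch using Redisson's documented RLock API (the connection settings and lock name are assumptions):

```java
import java.util.concurrent.TimeUnit;
import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class LockExample {
    public static void main(String[] args) throws InterruptedException {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        RLock lock = redisson.getLock("myLock");
        // Wait up to 10 seconds for the lock; auto-release after 30 seconds
        // in case this process dies while holding it.
        if (lock.tryLock(10, 30, TimeUnit.SECONDS)) {
            try {
                // critical section guarded across all JVMs sharing this Redis
            } finally {
                lock.unlock();
            }
        }
        redisson.shutdown();
    }
}
```

Using tryLock with a lease time is a common choice here, since the lock is released automatically if the holder crashes.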
During execution of any Redis Lua script (atomic/exclusive?), any writes that have occurred during the script, but before an error, will be committed/written and not rolled back as part of the implementation.
I am just wondering about the situation where your script is mostly bulletproof (e.g. preconditions checked), but the instance executing your script crashes halfway, with some writes already committed and stored in the AOF log (or propagated to other masters in the case of the upcoming Redis Cluster). How do you recover, and what are the best practices for this?
Also, I would like to double-check that a Redis script is executed atomically/exclusively, as in, no other operations can occur while it runs. I am fairly sure this is the case, but does this also hold true for the upcoming Redis Cluster implementation, and would multiple masters execute scripts concurrently?
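To make the partial-write behavior concrete, here is a small sketch (using the Jedis client; the script and key names are made up) in which a script writes a key and then fails. The write made before the error survives:

```java
import java.util.Collections;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisDataException;

public class PartialWriteDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // The script sets KEYS[1], then fails on purpose:
            // INCR on a non-numeric string raises an error mid-script.
            String script =
                "redis.call('SET', KEYS[1], ARGV[1]) " +
                "return redis.call('INCR', KEYS[1])";

            try {
                jedis.eval(script, Collections.singletonList("demo:key"),
                           Collections.singletonList("not-a-number"));
            } catch (JedisDataException e) {
                // The script aborted, but the SET before the error is NOT rolled back:
                System.out.println(jedis.get("demo:key")); // prints "not-a-number"
            }
        }
    }
}
```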
I am new to Redis. I have an application in which multiple Redis commands make up a transaction. If one of them fails, does Redis roll back the transaction like relational databases do? Is it the user's responsibility to roll back the transaction?
Redis does not roll back transactions the way relational databases do.
If you have a relational database background, the fact that Redis commands can fail during a transaction, yet Redis still executes the rest of the transaction instead of rolling back, may look odd to you.
However, there are good reasons for this behavior:
Redis commands can fail only if called with a wrong syntax (and the problem is not detectable during the command queuing), or against keys holding the wrong data type: this means that in practical terms a failing command is the result of a programming error, and a kind of error that is very likely to be detected during development, and not in production.
Redis is internally simplified and faster because it does not need the ability to roll back.
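You can see this with a short sketch using the Jedis client (key names made up): a runtime type error inside MULTI/EXEC fails only that one command, and the surrounding commands still take effect:

```java
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class NoRollbackDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.set("counter", "not-a-number");

            Transaction tx = jedis.multi();
            tx.incr("counter");          // fails at EXEC time: wrong data type
            tx.set("other", "written");  // still executes despite the failure above

            List<Object> results = tx.exec();
            System.out.println(results.get(0));     // an error object, not a value
            System.out.println(jedis.get("other")); // "written" -- no rollback
        }
    }
}
```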
Check out "Why Redis does not support rollback transactions" in the documentation.
Documentation here. Redis does not support rollback.
Scenario:
We have a WCF workflow with a client that does NOT use TransactionFlow.
The workflow contains several sequential TransactedReceiveScopes (using content-based correlation).
The TransactedReceiveScopes contain custom db operations.
Observations:
When we run SQL Profiler against the first call, we see all the custom db calls and the SaveInstance call in the trace.
We've noticed that, even though the SendReply is at the very end of the TransactedReceiveScope, sometimes the SendReply occurs a good 10 seconds before the transaction gets committed.
We tried changing TimeToPersist and TimeToUnload to zero, but that had no effect. (The trace shows the SaveInstance happening immediately anyway; it is the commit that seems to be delayed.)
Questions:
Are our observations correct?
At what point is the transaction committed? Is this like garbage collection - i.e. it commits some time later when it's not busy?
Is there any way to control the commit delay, or is the only way to do this to use TransactionFlow from the client (and then it should all commit when the client commits, including the persist)?
The TransactedReceiveScope commits the transaction when the body completes, but as all execution is done through the workflow scheduler, that could be some time later. It is not related to garbage collection, and there is no real way to influence it other than to avoid a busy machine and a lot of other parallel activities that could also be in the execution queue.