Redisson has several lock implementations (RLock, RedissonMultiLock, RedissonRedLock), but it is not clear what safety and liveness guarantees each of the lock types provides.
Referring to this - https://redis.io/topics/distlock - I believe the RedLock implementation must be the most robust one, but nothing is mentioned regarding the lack of fault tolerance of the other implementations.
Redisson lock objects have the same fault-tolerance properties as the Redis setup itself.
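For illustration, here is a minimal sketch of how the two styles are typically used with Redisson. The lock names and the client setup are made up for the example; in a real RedLock deployment each client would be configured against a separate, independent Redis master, per https://redis.io/topics/distlock.

```java
import org.redisson.Redisson;
import org.redisson.RedissonRedLock;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import java.util.concurrent.TimeUnit;

public class LockExamples {
    public static void main(String[] args) throws InterruptedException {
        // Plain RLock: tied to a single Redisson client / Redis setup,
        // so it inherits that setup's fault-tolerance characteristics.
        RedissonClient redisson = Redisson.create(); // default config, assumes a local Redis
        RLock lock = redisson.getLock("myLock");
        if (lock.tryLock(10, 30, TimeUnit.SECONDS)) { // wait up to 10s, auto-release after 30s
            try {
                // critical section
            } finally {
                lock.unlock();
            }
        }

        // RedLock: combines locks acquired on several Redis instances.
        // Here all three clients use the default config only to keep the sketch short;
        // in practice they would point at independent masters.
        RedissonClient c1 = Redisson.create();
        RedissonClient c2 = Redisson.create();
        RedissonClient c3 = Redisson.create();
        RedissonRedLock redLock = new RedissonRedLock(
                c1.getLock("myLock"), c2.getLock("myLock"), c3.getLock("myLock"));
        redLock.lock();
        try {
            // critical section protected across a majority of the instances
        } finally {
            redLock.unlock();
        }
    }
}
```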
This is a very tricky question, because when we check the rules it's not explicitly stated that a Repository can't call a UseCase. However, it doesn't seem logical.
Are there any definitions/good practices, and why shouldn't it do this?
Thanks!
The short answer is "No" - it shouldn't, regardless of the context (in almost all cases). As to why - the definitions, principles and good practices - it may be helpful to think in terms of clear separation of concerns across your whole Clean Architecture implementation.
Consider this illustration as background for thinking about how one could organize the interactions (and dependencies) between the main parts of a Clean Architecture.
The main principles illustrated are:
Through its execution, the Use Case has different "data needs" (A and B). It doesn't implement the logic to fulfill them itself (since they require some specific technology). So the Use Case declares these as two Gateway-interfaces ("ports"), in this example, and then calls them amidst its logic.
Both of these interfaces declare some distinct set of operations that should be provided (implemented) from "outside". The Use Case, in its logic, needs and invokes all of those A and B operations. They are separated into A and B, because they are different kinds of responsibilities - and might be implemented by different parts of the system (but not necessarily). Let's say that the Use Case needs loading of persisted domain objects (as part of A operations), but it also needs to retrieve configuration (as some key-value pairs), which are B operations. These interfaces are segregated since both sets of operations serve distinct purposes for the Use Case. Anyhow, it's important design-wise, that they both explicitly "serve" the Use Case needs - meaning, they are not generic entity-centric DAO / Repository interfaces; they ONLY have operations that the Use Case actually needs and invokes, in exactly the shape and form (parameters, return values) that the Use Case specifically needs them. They are "ports" to be "plugged into", as part of the whole Use Case.
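A minimal sketch of that idea follows. All names (PlaceOrderUseCase, OrderGateway, ConfigurationGateway, Order) are purely illustrative, not taken from any particular framework:

```java
// Domain object (logic elided for brevity).
class Order {
    void applyLimit(int maxItems) { /* domain logic */ }
}

// "Port" A: the persistence needs of the Use Case - only the operations it actually
// invokes, shaped exactly the way the Use Case needs them.
interface OrderGateway {
    Order loadOrder(String orderId);
    void saveOrder(Order order);
}

// "Port" B: the configuration needs of the Use Case - a different kind of responsibility,
// so it is declared as a separate, segregated interface.
interface ConfigurationGateway {
    String valueFor(String key);
}

// The Use Case declares its needs and calls them amidst its logic;
// it has no idea which technology fulfills them.
class PlaceOrderUseCase {
    private final OrderGateway orders;          // A
    private final ConfigurationGateway config;  // B

    PlaceOrderUseCase(OrderGateway orders, ConfigurationGateway config) {
        this.orders = orders;
        this.config = config;
    }

    void execute(String orderId) {
        Order order = orders.loadOrder(orderId);
        String limit = config.valueFor("order.max-items");
        order.applyLimit(Integer.parseInt(limit));
        orders.saveOrder(order);
    }
}
```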
The "outside" providers of these responsibilities are the Adapters (the implementers) of those needs. To fulfill them, they typically use some specific technology or framework - a database, a network call to some server, a message producer, a file operation, Spring's configuration properties, etc.
The Use Case is invoked (called) only by the Drivers side of the architecture (that is, the initiating side). The Use Case itself, in fact, is one of the "initiators" for its further collaborating parts (e.g., the Adapters).
On the other hand, the Use Case is "technically supported" (the declared parts of its needs "implemented") by the Adapters side of the architecture.
Effectively, there is a clear separation of who calls what - meaning, at runtime the call stack progresses in a clear directional flow of control across this architecture.
The flow of control is always from Drivers towards Adapters (via the Use Case), never the other way around.
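To complete the purely illustrative sketch from above, a Driver (for example a web controller, a scheduler, or a message listener) is the side that wires up and invokes the Use Case - never the other way around:

```java
// Driver side: the initiating end of the flow of control.
// It holds (or is given) a fully wired Use Case and simply invokes it.
class OrderController {
    private final PlaceOrderUseCase placeOrder; // from the sketch above

    OrderController(PlaceOrderUseCase placeOrder) {
        this.placeOrder = placeOrder;
    }

    void handlePlaceOrderRequest(String orderId) {
        placeOrder.execute(orderId);
    }
}
```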
These are principles I have learned, researched, implemented and corrected across my career in different projects. In other words, they've been shaped by the real world in terms of what has been practical and useful - in terms of separation of concerns and clear division of responsibilities - in my experience. Yours naturally may differ, and there is no universal fit - CA is not a recipe, it is a mindset of software design, implementable in several (better and worse) ways.
Thinking simply though, I would imagine that in your situation the Repository is your "data storage gateway" implementation of the Use Case's (Data) Gateway. The UC needs that data from "somewhere" - without caring where it comes from or how it is stored. This is very important - the whole core domain, along with the Use Case, needs to be framework and I/O agnostic.
Your Repository fulfills that need - it provides persisted domain objects. But the Use Case must not call it directly; instead it declares a Gateway (in Hexagonal, e.g. Ports & Adapters, architecture, named a Port) - with the needed operation(s) that your Repository has to implement. By using some specific (DB / persistence) technology, your Repository fulfills it - it implements one of the Use Case's "ports", as an Adapter.
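Continuing the illustrative sketch (reusing the hypothetical OrderGateway and Order types from above), the Repository then lives on the Adapters side and implements the port using a concrete persistence technology:

```java
import javax.sql.DataSource;

// Adapter: implements the Use Case's port using a specific persistence technology.
// The DataSource here stands in for whatever DB access mechanism you actually use.
class JdbcOrderRepository implements OrderGateway {
    private final DataSource dataSource;

    JdbcOrderRepository(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Order loadOrder(String orderId) {
        // ... run a SELECT against the database and map the row to a domain Order ...
        return mapRowToOrder(orderId);
    }

    @Override
    public void saveOrder(Order order) {
        // ... run an INSERT/UPDATE against the database ...
    }

    private Order mapRowToOrder(String orderId) {
        return new Order(); // mapping elided
    }
}
```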
With the above being said - on rare occasions, some Gateway implementations may demand exceptions. They might need several back-and-forth interactions, even across your architecture. These are rare and indeed complex situations - likely not necessary for a Repository implementation.
But, if that is really an inevitable case - then it's best if the Use Case, when calling the Gateway, provides a callback interface as a parameter of the call. So during its processing the Gateway's implementer can call back using the operations in that interface - effectively implementing the back-and-forth necessity. In most cases though, this implies excessive logic and complexity at the adapters' level, which should be avoided - and serves as a strong cue that the current solution should be re-designed.
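A hedged sketch of what such a callback "port" could look like (again, the names are made up for illustration):

```java
// Callback interface declared by the Use Case; the Gateway implementation can call
// back into it while it is doing its (possibly long-running) work.
interface ImportProgress {
    void itemImported(String itemId);
    boolean shouldContinue(); // lets the Use Case abort the back-and-forth if needed
}

interface BulkImportGateway {
    // The callback travels as a parameter of the call, so the adapter never needs
    // to know about (or depend on) the Use Case type itself.
    void importAll(String sourceId, ImportProgress progress);
}

class BulkImportUseCase {
    private final BulkImportGateway gateway;
    private int imported = 0;

    BulkImportUseCase(BulkImportGateway gateway) {
        this.gateway = gateway;
    }

    void execute(String sourceId) {
        gateway.importAll(sourceId, new ImportProgress() {
            @Override public void itemImported(String itemId) { imported++; }
            @Override public boolean shouldContinue() { return imported < 10_000; }
        });
    }
}
```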
I want to implement distributed caching (Redis) in an ASP.NET Core project. After a bit of research I found that there are two ways of creating a Redis connection: using AddStackExchangeRedisCache in Startup.cs, and using ConnectionMultiplexer.
AddStackExchangeRedisCache - This happens in Startup.cs.
Doubts in above approach:
2. Does this work in a Prod environment?
3. When and how is the connection initialized?
4. Is it a thread-safe way to create the connection?
By using the ConnectionMultiplexer, we can initialize the DB instance. As per a few articles, lazy initialization will take care of the thread safety as well.
Doubts:
From the above, which is the better approach?
I tried both approaches on my local machine and both are working fine, but I could not find the pros and cons of each approach.
With ConnectionMultiplexer, you have the full list of commands that you can execute on your Redis server. With DistributedCaching, you can only store/retrieve a byte array or a string, and you can not execute any other commands that Redis provides. So if you just want to use it as a cache store, DistributedCaching provides a good abstraction layer. However, even the simplest increment/decrement command for Redis will not be available, unless you use ConnectionMultiplexer.
The extension method AddStackExchangeRedisCache uses a ConnectionMultiplexer under the hood (see here, and here for the extension method itself).
#2: works in prod either way
#3: connection is established lazily on first use, the ConnectionMultiplexer instance is re-used (registered as DI singleton)
#4: yes, see above (or here): a SemaphoreSlim is used to ensure the connection is only created once
pros and cons: since both use the ConnectionMultiplexer, they are pretty similar.
You can pick between the advantages of using the implementation-agnostic IDistributedCache vs. direct use of the multiplexer and the StackExchange.Redis API (which has more specific functions than the interface).
Wrappers like IDistributedCache and StackExchangeRedis.Extensions do not include all the functions available in the original library. In particular, I needed to delete all the keys in the Redis cache, which was not exposed in these wrappers.
Garbage-collected object-oriented programming languages reclaim unused memory automatically, but all other kinds of resources (e.g. files, sockets...) still require manual release, since finalizers cannot be trusted to run in time (or at all).
Therefore such resource objects usually provide some kind of "close"- or "dispose"-method/pattern, which can be problematic for a number of reasons:
Dispose has to be called manually, which may pose problems in cases where it is not clear when the resource has to be released (a similar problem to manual memory management)
The disposable pattern is somewhat "viral", since each class containing a disposable resource must be made disposable as well in order to guarantee correct resource cleanup (see the sketch after this list)
Adding a disposable member to a class, which requires the class to become disposable as well, changes the interface and the usage patterns of the class, thus breaking encapsulation
The disposable-pattern creates problems with inheritance, i.e. when a derived class is disposable, while the base class isn't
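To make the "viral" point concrete, here is a small Java sketch (the classes are invented for illustration): once a class holds a closeable resource as a member, it has to become AutoCloseable itself, and so does everything that owns it, all the way up.

```java
import java.io.FileWriter;
import java.io.IOException;

// A class that merely *contains* a resource is forced to implement AutoCloseable itself,
// and every class that contains it faces the same choice - the pattern propagates upward.
class ReportWriter implements AutoCloseable {
    private final FileWriter out; // the underlying resource

    ReportWriter(String path) throws IOException {
        this.out = new FileWriter(path);
    }

    void writeLine(String line) throws IOException {
        out.write(line);
        out.write(System.lineSeparator());
    }

    @Override
    public void close() throws IOException {
        out.close(); // must be released manually; finalizers can't be relied upon
    }
}

class Demo {
    public static void main(String[] args) throws IOException {
        // try-with-resources makes the manual release deterministic, but the caller
        // still has to know that ReportWriter is disposable in the first place.
        try (ReportWriter writer = new ReportWriter("report.txt")) {
            writer.writeLine("hello");
        }
    }
}
```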
So, are there any alternative concepts/approaches for properly releasing such resources? Any papers/research in that direction?
One approach (in languages that support it) is to manually trigger a garbage collection event to cause finalizers to run. However, some languages (like Java) do not provide a reliable mechanism for doing so.
Basically, if I have lots of synchronised methods in a monitor, will this effectively avoid deadlocks?
In general, no, it does not guarantee the absence of deadlocks. Please have a look at the code examples at Deadlocks and Synchronized methods and Deadlock in Java. Two classes, A and B, with only synchronized methods can still produce a perfect deadlock - a similar sketch is shown below.
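A minimal sketch along the lines of those examples: every method is synchronized, yet two threads that acquire the two monitors in opposite order still deadlock.

```java
// Each instance acts as a monitor (all methods synchronized) -
// and yet two threads can deadlock by taking the two monitors in opposite order.
class A {
    synchronized void callB(B b) throws InterruptedException {
        Thread.sleep(100);  // give the other thread time to lock B
        b.last();           // needs B's monitor while still holding A's
    }
    synchronized void last() { }
}

class B {
    synchronized void callA(A a) throws InterruptedException {
        Thread.sleep(100);  // give the other thread time to lock A
        a.last();           // needs A's monitor while still holding B's
    }
    synchronized void last() { }
}

class DeadlockDemo {
    public static void main(String[] args) {
        A a = new A();
        B b = new B();
        new Thread(() -> { try { a.callB(b); } catch (InterruptedException ignored) { } }).start();
        new Thread(() -> { try { b.callA(a); } catch (InterruptedException ignored) { } }).start();
        // With high probability both threads now wait on each other's monitor forever.
    }
}
```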
Also, in my opinion, your wording "Java monitor with Synchronised Methods", although being conceptually correct, slightly deviates from the one accepted in Java. For example, the java.lang.Object.wait() javadoc puts it in the following way:
"The current thread must own this object's monitor"
That implicitly suggests that the object and the monitor are not the same thing. Instead, the monitor is something we don't directly see or address.
I read this question (and several others):
What's the difference between the atomic and nonatomic attributes?
I fully understand (at least I hope so :-D ) how the atomic/nonatomic specifier for properties works:
Atomic guarantees that a "read" operation won't be interrupted by a "write" operation.
Nonatomic doesn't guarantee this.
Neither atomic nor nonatomic solve race conditions, where one thread is reading and two threads are writing. There is no way to predict what result the read operation will return. This needs to be solved by additional synchronization.
Neither atomic nor nonatomic guarantee overall data integrity; one thread could set one property while another thread sets a second property in a state which is inconsistent with the state of the first property. This also needs to be solved by additional synchronization.
What makes my eyebrows raise is that people are divided into two camps:
Pro atomic: It makes sense to use nonatomic only for performance optimization.
And if you are not optimizing, then you should always use atomic because of point 1. This way you won't get some complete crap when reading this property in a multi-threaded application. And sure, if you care about points 2 and 3, you need to add more synchronization on top of it.
Against atomic: It doesn't make sense to use atomic at all.
Since atomic doesn't solve all the problems in a multi-threaded application, it doesn't make sense to use it at all, since you will need to add more synchronization code on top of it anyway. It will just make things slower.
I am leaning to the pro-atomic camp, but I want to do a sanity check that I didn't miss anything.
Lacking a very specific question (though still a good question), I'll answer with personal experience, FWIW.
In general, concurrency design is hard. With modern conveniences like GCD and ARC, the tools for implementing concurrent systems have certainly improved. However, the architecture of concurrency is still very hard.
And, generally, the hard part has nothing to do with individual properties; individual getters and setters. Concurrency is something that is implemented at a higher level.
The current state of the art is concurrency in isolation. That is, the parts of your app that are running concurrently are doing so using isolated graphs of objects that have extremely minimal connection to the rest of your application (typically, the "connections" are via callbacks that bundle up a bit of state and toss it over to some other queue, often the main queue for updating the UI).
By keeping the concurrency surface area -- the # of entry points into your code that must be concurrency safe -- to an absolute minimum, you reduce both complexity and the amount of time you'll spend debugging really weird, oft irreproducible, concurrency problems (that'll eat at your sanity).
Given all that, the value of atomic properties is pretty minimal. Sure, they can be useful along what should be the very very small set of interfaces -- of API -- that might be banged upon from multiple threads, but that is about it.
If you have objects for which the accessors are being banged on rapidly, making them atomic can be a significant performance hit, but premature optimization is the devil's fingers at play.