We have a specific case where one process will acquire a Curator lock on a key and will attach a Watch. The other process also attaches a Watch on the same key. I want the other process to be notified whenever the Lock is released: either by the process itself, or by ZooKeeper when the process is dead.
I tried NodeCache, but I guess NodeCache does not work when the znode type is EPHEMERAL_SEQUENTIAL; at least, my test case fails.
I managed to solve the problem by using PathChildrenCache instead of NodeCache. The Curator locking API creates EPHEMERAL_SEQUENTIAL znodes on ZooKeeper, so it was difficult to give NodeCache the exact path to watch.
The PathChildrenCache implementation will be called back whenever any process uses the lock on the same key, because each process creates a child node under that key on ZooKeeper.
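For illustration, here is a minimal sketch of the watching side using Curator's PathChildrenCache; the connection string and the lock path ("/locks/my-key") are assumptions:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LockWatcher {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory
                .newClient("localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Watch the children of the lock path. The Curator lock recipe creates an
        // EPHEMERAL_SEQUENTIAL child under this path for each lock holder.
        PathChildrenCache cache = new PathChildrenCache(client, "/locks/my-key", true);
        cache.getListenable().addListener((c, event) -> {
            if (event.getType() == PathChildrenCacheEvent.Type.CHILD_REMOVED) {
                // Fired when the holder releases the lock or its session dies.
                System.out.println("Lock released: " + event.getData().getPath());
            }
        });
        cache.start();
    }
}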
My application uses a singleton Redis connection everywhere; it is initialized at startup.
My understanding of MULTI/EXEC is that all my WATCHed keys are UNWATCHed whenever EXEC is called anywhere in the application.
This would mean that all WATCHed keys, irrespective of which MULTI block they were WATCHed for, will be unwatched, defeating the whole purpose of WATCHing them.
Is my understanding correct?
How do I avoid this situation? Should I create a new connection for each transaction?
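For reference, this is the optimistic-locking pattern in question, sketched with the Jedis client (key names are hypothetical); WATCH state is tracked per connection, and EXEC clears the watches on that connection:

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.Transaction;

public class WatchExample {
    public static void main(String[] args) {
        JedisPool pool = new JedisPool("localhost", 6379);
        try (Jedis jedis = pool.getResource()) {    // connection checked out for this transaction
            jedis.watch("account:42:balance");      // WATCH applies to this connection only
            // assumes the key already holds a numeric value
            long balance = Long.parseLong(jedis.get("account:42:balance"));

            Transaction tx = jedis.multi();
            tx.set("account:42:balance", String.valueOf(balance - 10));
            // exec() returns null (in Jedis) when a watched key changed; it also UNWATCHes everything
            List<Object> result = tx.exec();

            System.out.println(result == null ? "aborted, retry" : "committed");
        }
    }
}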
This process happens inside the Redis server and will block all incoming commands, so it doesn't matter whether you use a single connection or multiple connections (all connections will be blocked).
I'm planning to have a main OSX application, which the user can launch, and a background process, which starts on OSX startup and runs prior to the main application.
I need a CoreData database to keep track of some changes... this database should be the same for the background task and the foreground app...
What are the options?
Is it possible for both to access the same SQLite store (located in the app bundle) by setting them up with the same .sqlite file?
Should they have two identical databases which they synchronize? Can this synchronization be automated?
Should there be one database for the background process, with the main application communicating with the background process?
Using a background process to update the datastore is fighting the frameworks. Also, your background process (and your main process, for that matter) won't be able to update the .sqlite file that lives in your bundle.
Instead of using a background process, use a background queue (via Grand Central Dispatch and NSManagedObjectContext's -performBlock:). Keeping the logic within one application will make your life easier: the communication happens within the application, instead of having to use XPC.
Don't forget to handle the case of a partial, interrupted, update. The sequence I suggest is:
Application launches.
Background queue launches, pulls updated info from server, creates updated datastore using a temporary name.
If the background update succeeds, the main queue of the application closes the old version of the datastore, then replaces it with the updated datastore. There is API to do this atomically.
Main thread reopens datastore and refreshes UI as needed.
If background update fails, reschedule an update attempt based on failure reasons (bad credentials, server unreachable, partial download, corrupt .sqlite).
If you're absolutely dead set on using two different processes, then you should still assume that the update might fail. So don't write to the live copy until you know you have a complete, valid replacement.
Apple achieves this in the Notes app using what they call a "cross process change coordinator". This allows the accountsd daemon and the Notes app to access the same CoreData SQLite database. Their class ICNotesCrossProcessChangeCoordinator in the NotesShared framework works by using NSDistributedNotificationCenter to post notifications of changes between the processes and merge them into each other's contexts. There are many more implementation details to the technique, but this should point you in the right direction.
I'm using AcquireLock method from ServiceStack Redis when updating and getting the key/value like this:
public virtual void Set(string key, T entity)
{
    using (var client = ClientManager.GetClient())
    {
        using (client.AcquireLock(key + ":locked", DefaultLockingTimeout, DefaultLockExpire))
        {
            client.Set(key, entity);
        }
    }
}
I've extended the AcquireLock method to accept an extra parameter for the expiration of the lock key. So I'm wondering whether I need AcquireLock at all. My class uses AcquireLock in every operation, like Get<>, GetAll<>, ExpireAt, SetAll<>, etc.
But this approach doesn't work every time. For example, if the operation inside the lock throws an exception, the key remains locked. For this situation I've added the DefaultLockExpire parameter to the AcquireLock method to expire the "locked" key.
Is there a better solution? And when do we need to acquire locks, like "lock" blocks in multi-threaded programming?
As The Real Bill's answer says, you don't need locks for Redis itself. What the ServiceStack client offers in terms of locking is not for Redis, but for your application. In a C# application, you can lock things locally with lock(obj) so that something cannot happen concurrently (only one thread can access the locked section at a time), but that only works if you have one web server. If you want to prevent something happening concurrently, you need a locking mechanism living outside of the web server. Redis is a good fit for this.
We have a case where we check whether a customer already has a shopping cart and, if not, create it. Between checking and creating it, there is a window in which another request could also find that the cart doesn't exist and proceed to create one. That's a classic case for locking, but a simple lock wouldn't work here because the requests may arrive at entirely different web servers. So for this, we use the ServiceStack Redis client (with some abstraction) to lock using Redis and only allow one request at a time to enter the "create a cart" section.
So to answer your actual question: no, you don't need a lock for getting/setting values to Redis.
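For illustration, here is a sketch of how such a cross-server lock (the "create a cart" case above) can be taken at the raw Redis level with the Jedis client; this is not the ServiceStack client's implementation, and the key name, token, and timeout are assumptions:

import java.util.Collections;
import java.util.UUID;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class CartLock {
    public static boolean createCartOnce(Jedis jedis, String customerId) {
        String lockKey = "lock:cart:" + customerId;
        String token = UUID.randomUUID().toString();

        // SET ... NX EX: take the lock only if it is free, with an expiry so a
        // crashed holder cannot leave the key locked forever.
        String ok = jedis.set(lockKey, token, SetParams.setParams().nx().ex(30));
        if (!"OK".equals(ok)) {
            return false; // another request is already creating the cart
        }
        try {
            // ... check whether the cart exists and create it if not ...
            return true;
        } finally {
            // Release only if we still own the lock (check-and-delete in one atomic step).
            String release =
                "if redis.call('GET', KEYS[1]) == ARGV[1] then " +
                "  return redis.call('DEL', KEYS[1]) " +
                "else return 0 end";
            jedis.eval(release, Collections.singletonList(lockKey),
                       Collections.singletonList(token));
        }
    }
}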
I wouldn't use locks for get/set operations. Redis will do those actions atomically, so there is no chance of it getting "changed underneath you" when setting or getting. I've built systems where hundreds of clients are updating/operating on values concurrently and never needed a lock to do those operations (especially an expire).
I don't know how ServiceStack.Redis implements its locking, so I can't say why it is failing. However, I'm not sure I'd trust it, given that no true locking is needed on the Redis side for data operations. Redis is single-threaded, so locking there doesn't make sense.
If you are doing complex operations where you get a value, operate on things based on it, then update it after a while and can't have the value change in the meantime, I'd recommend reading and grokking http://redis.io/topics/transactions to see whether what you want is what Redis is good for, whether your code needs to be refactored to eliminate the problem, or at least to find a better way to do it.
For example, SETNX may be the route you need to get what you want, but without details I can't say it will work.
As @JulianR says, the locking in ServiceStack.Redis is only for application-level distributed locks (i.e. to replace using a DB or an empty .lock file on a distributed file system), and it only works against other ServiceStack.Redis clients in other processes using the same key/API to acquire the lock.
You would never need to do this for normal Redis operations, since they're all atomic. If you want to ensure a combination of Redis operations happens atomically, then you would combine them within a Redis transaction, or alternatively you can execute them within a server-side Lua script; both allow atomic execution of batch operations.
Consider the following scenario.
There are 2 Hazelcast nodes. One is stopped, another is running under quite heavy load.
Now, the second node comes up. The application starts up and its Hazelcast instance hooks up to the first. Hazelcast starts data repartitioning. For 2 nodes, this essentially means that each entry in the IMap gets copied to the new node and the two nodes are assigned master/backup roles arbitrarily.
PROBLEM:
If the first node is brought down during this process, and the replication is not done completely, part of the IMap contents and ITopic subscriptions may be lost.
QUESTION:
How to ensure that the repartitioning process has finished, and it is safe to turn off the first node?
(The whole setup is made to enable software updates without downtime, while preserving current application state).
I tried using getPartitionService().addMigrationListener(...), but the listener does not seem to be hooked up to the complete migration process. Instead, I get tens to hundreds of calls to migrationStarted()/migrationCompleted(), one for each chunk of the replication.
1- When you gracefully shut down the first node, the shutdown process should wait (block) until the data is safely backed up.
hazelcastInstance.getLifecycleService().shutdown();
2- If you use Hazelcast Management Center, it shows the ongoing migration/repartitioning operation count on the home screen.
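As a rough sketch, the same safety condition can also be checked in code via the PartitionService API before stopping the first node; import paths vary by Hazelcast version (a 3.x-style layout is assumed here):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.PartitionService;

public class SafeShutdown {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();

        // Wait until every partition has a backup on another member, i.e.
        // migration/repartitioning has caught up and losing one node is safe.
        PartitionService partitionService = instance.getPartitionService();
        while (!partitionService.isClusterSafe()) {
            Thread.sleep(1000);
        }

        // Graceful shutdown blocks until this member's data is backed up elsewhere.
        instance.getLifecycleService().shutdown();
    }
}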
I have a set of resources, each of which has a unique identifier. Each resource element must be locked before it is used and unlocked afterwards. The logic of the application is:
lock any one element;
if (none locked) then
exit with error;
else
get resource-id from lock
use resource
unlock resource
end
ZooKeeper looks like a good candidate for managing these locks, being fast and resilient, and it seems quite simple to recover from client failure.
Can anyone suggest how I could use ZooKeeper to achieve this?
How about this:
You have the resources under a directory (say /locks). Each process that needs a lock lists all the children of this directory and then creates an ephemeral node, e.g. /locks/resource1/lock, depending on which resource it wants to lock. The choice could be randomized over the set of resources. This ephemeral node will be deleted by the process as soon as it is done using the resource. A process should only use resource_{i} if it has been able to create /locks/resource_{i}/lock.
Would that work?
Thanks
mahadev
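For illustration, here is a minimal sketch of the scheme described above using the raw ZooKeeper client; the paths and error handling are assumptions:

import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ResourceLocker {
    private final ZooKeeper zk;

    public ResourceLocker(ZooKeeper zk) {
        this.zk = zk;
    }

    /** Try to lock any free resource under /locks; returns its id, or null if all are taken. */
    public String lockAnyResource() throws KeeperException, InterruptedException {
        List<String> resources = zk.getChildren("/locks", false);
        Collections.shuffle(resources); // randomize which resource we try first
        for (String resourceId : resources) {
            try {
                // Ephemeral node: removed automatically if this client's session dies.
                zk.create("/locks/" + resourceId + "/lock", new byte[0],
                          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return resourceId; // we own this resource now
            } catch (KeeperException.NodeExistsException alreadyLocked) {
                // someone else holds it; try the next resource
            }
        }
        return null; // none free: the "exit with error" branch in the question's pseudocode
    }

    public void unlock(String resourceId) throws KeeperException, InterruptedException {
        zk.delete("/locks/" + resourceId + "/lock", -1);
    }
}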