Are Redis updates synchronous?

If I push something onto a list in Redis, then pop from that list, is it guaranteed that I will get the item I pushed earlier or is it possible for the read to happen before the write?

Redis runs in a single thread (with the exception of forking when doing background saves, but that doesn't matter), so any request that you send later will necessarily run later. Thus, you will see the value that you pushed.
(Though, on second thought, it is probably possible to provoke a failure if you are ill-inclined and dedicated to making it fail on purpose. But that would require sending your requests via separate connections, which doesn't happen accidentally in normal operation.)
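To make the ordering concrete, here is a minimal sketch using the ServiceStack.Redis client that appears later on this page (the list name is hypothetical). Because both commands travel over the same connection and Redis executes commands one at a time, the pop always sees the value the preceding push added:

using System;
using ServiceStack.Redis;

class PushThenPopDemo
{
    static void Main()
    {
        using (var client = new RedisClient("localhost", 6379))
        {
            client.AddItemToList("jobs", "job-42");       // RPUSH jobs job-42
            var item = client.RemoveEndFromList("jobs");  // RPOP jobs
            Console.WriteLine(item);                      // "job-42", unless another connection popped it first
        }
    }
}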

Related

Broker Queue - Move Poisoned Messages to Table

Currently I have a queue that stores merge queries which are run once they are read off the queue. This all works well, and currently if there is an error with the merge the queue will be disabled and I have to manually remove the message (or fix the merge, as it were).
I was wondering whether it is possible to simply move the poisoned message to a table? The queues run important (and different) merges that must continually run to ensure data is updated. It is not beneficial to me for the queue to, say, become disabled overnight and build up a huge backlog.
Is there any way for me to simply push the bad message into a table? I have attempted this myself, however I wound up having a TRY...CATCH inside a TRANSACTION, which performs a rollback on the error anyway (thus invoking the five-rollbacks-to-disable rule). Most solutions online mention only manually removing the message.
Any suggestions? Is this just a bad idea? If so, why?
Thanks.
The disable-after-five-rollbacks behaviour can be switched off by setting the POISON_MESSAGE_HANDLING status to OFF in the CREATE QUEUE or ALTER QUEUE statement. You can then use TRY...CATCH to manually deal with transactions that fail.
Like you, I don't find this feature very useful, so I almost always turn it off in my applications and deal with problem messages in whatever way seems best.
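For illustration, here is a minimal sketch of switching the setting off, executed from C# only to keep the examples on this page in one language (the connection string and the queue name dbo.MergeQueue are hypothetical); the same ALTER QUEUE statement can of course be run directly as T-SQL:

using System.Data.SqlClient;

class DisablePoisonHandling
{
    static void Main()
    {
        var connectionString = "Server=.;Database=MyDb;Integrated Security=true";
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            // Stop Service Broker from disabling the queue after five rollbacks,
            // so the activation procedure can TRY...CATCH a failed merge and
            // insert the offending message into an error table instead.
            "ALTER QUEUE dbo.MergeQueue WITH POISON_MESSAGE_HANDLING (STATUS = OFF);", conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}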

Acquiring Locks when updating a Redis key/value

I'm using AcquireLock method from ServiceStack Redis when updating and getting the key/value like this:
public virtual void Set(string key, T entity)
{
    using (var client = ClientManager.GetClient())
    {
        using (client.AcquireLock(key + ":locked", DefaultLockingTimeout, DefaultLockExpire))
        {
            client.Set(key, entity);
        }
    }
}
I've extended the AcquireLock method to accept an extra parameter for the expiration of the lock key. So I'm wondering whether I need AcquireLock at all. My class uses AcquireLock in every operation, like Get<>, GetAll<>, ExpireAt, SetAll<>, etc.
But this approach doesn't work every time. For example, if the operation inside the lock throws an exception, the key remains locked. For this situation I've added the DefaultLockExpire parameter to the AcquireLock method to expire the "locked" key.
Is there a better solution, or when do we need to acquire locks, like "lock" blocks in multi-threaded programming?
As The Real Bill's answer says, you don't need locks for Redis itself. What the ServiceStack client offers in terms of locking is not for Redis, but for your application. In a C# application you can lock things locally with lock(obj) so that something cannot happen concurrently (only one thread can access the locked section at a time), but that only works if you have one web server. If you want to prevent something happening concurrently across servers, you need a locking mechanism that lives outside the web server. Redis is a good fit for this.
We have a case where we check whether a customer already has a shopping cart and, if not, create it. Between checking and creating it, there is a window in which another request could also find that the cart doesn't exist and proceed to create one. That's a classic case for locking, but a simple lock wouldn't work here because the other request may have arrived at an entirely different web server. So for this we use the ServiceStack Redis client (with some abstraction) to lock via Redis and only allow one request at a time to enter the "create a cart" section, as sketched below.
So to answer your actual question: no, you don't need a lock for getting/setting values to Redis.
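As an illustration of the cart scenario described above, here is a minimal sketch using ServiceStack.Redis; the key names, the Cart type, and the pooled client manager setup are assumptions, not code from the question:

using System;
using ServiceStack.Redis;

class Cart
{
    public string CustomerId { get; set; }
}

class CartService
{
    static readonly IRedisClientsManager ClientManager =
        new PooledRedisClientManager("localhost:6379");

    public static Cart GetOrCreateCart(string customerId)
    {
        using (var client = ClientManager.GetClient())
        // The lock lives in Redis, so it is honoured by every web server
        // that acquires it with the same key; the timeout bounds the wait.
        using (client.AcquireLock("cart:" + customerId + ":lock", TimeSpan.FromSeconds(10)))
        {
            var cart = client.Get<Cart>("cart:" + customerId);
            if (cart == null)
            {
                // Only one request at a time can reach this point,
                // so the cart is created exactly once.
                cart = new Cart { CustomerId = customerId };
                client.Set("cart:" + customerId, cart);
            }
            return cart;
        }
    }
}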
I wouldn't use locks for get/set operations. Redis will do those actions atomically, so there is no chance of it getting "changed underneath you" when setting or getting. I've built systems where hundreds of clients are updating/operating on values concurrently and never needed a lock to do those operations (especially an expire).
I don't know how ServiceStack.Redis implements the locking it has, so I can't say why it is failing. However, I'm not sure I'd trust it, given that there is no true locking needed on the Redis side for data operations. Redis is single-threaded, so locking there doesn't make sense.
If you are doing complex operations where you get a value, operate on things based on it, then update it after a while, and can't have the value change in the meantime, I'd recommend reading and grokking http://redis.io/topics/transactions to see whether what you want is what Redis is good for, whether your code needs to be refactored to eliminate the problem, or at the least to find a better way to do it.
For example, SETNX may be the route you need to get what you want, but without details I can't say it will work.
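If SETNX does fit, a sketch of the idea with ServiceStack.Redis might look like the following; note that the exact wrapper method name has varied between client versions (SetEntryIfNotExists in older releases, SetValueIfNotExists in newer ones), so treat the call below as an assumption:

using ServiceStack.Redis;

class SetNxSketch
{
    static void Main()
    {
        using (var client = new RedisClient("localhost", 6379))
        {
            // SETNX only sets the key if it does not exist yet, so exactly
            // one caller "wins"; everyone else sees false and backs off.
            bool acquired = client.SetValueIfNotExists("resource:owner", "worker-1");
            if (acquired)
            {
                // do the work, then remove the key to release the resource
                client.Remove("resource:owner");
            }
        }
    }
}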
As @JulianR says, the locking in ServiceStack.Redis is only for application-level distributed locks (i.e. to replace using a DB or an empty .lock file on a distributed file system), and it only works against other ServiceStack.Redis clients in other processes using the same key/API to acquire the lock.
You would never need to do this for normal Redis operations since they're all atomic. If you want to ensure a combination of Redis operations happens atomically, you can combine them within a Redis transaction or alternatively execute them within a server-side Lua script - both allow atomic execution of batch operations.
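For example, here is a minimal sketch of an atomic batch using ServiceStack.Redis's transaction API (the key names are hypothetical); both queued commands are sent as a single MULTI/EXEC block, so no other client can observe the intermediate state:

using ServiceStack.Redis;

class AtomicBatchSketch
{
    static void Main()
    {
        using (var client = new RedisClient("localhost", 6379))
        using (var trans = client.CreateTransaction())
        {
            trans.QueueCommand(r => r.IncrementValue("orders:count"));
            trans.QueueCommand(r => r.AddItemToList("orders:recent", "order-42"));
            trans.Commit(); // MULTI ... EXEC
        }
    }
}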

Good ways to decouple GUIs from SOAP/WS-API update/write calls?

Let's assume we have some configuration GUI that in its current form uses direct DB transactions to submit new configurations for more than one configurable component in a consistent manner.
Now let's move the data (DB) stuff behind some SOAP/WS API. The GUI has no direct DB access anymore. The transactional behaviour must remain, but the API should NOT be designed to explicitly accommodate the GUI form submissions. In fact, I don't even know how the new GUI will work or how the user input will be structured. Therefore I need to provide something like WS-AtomicTransaction on the API server side. However, there are (at least) two caveats:
The GUI is written in PHP: I don't think there is any WS-Transaction support in PHP available.
I don't want to keep DB transactions open on the server side while waiting for additional client requests.
Solutions I can think of:
using Camel's aggregation. However, that would make things more complicated in at least two ways:
You cannot use DB row ids of newly inserted rows in the subsequent calls inside the same transaction. You need to use some sort of symbolic back-referencing because there would be no communication between client and server while processing the aggregated messages.
call replies would not be immediate (or the immediate and separate reply to each single call would only be some sort of stub, i.e. not containing any useful information beyond "your message has been attached to TX xyz" -- if that's at all possible in the Camel aggregation case).
the two disadvantages of the previous solution make me think of request batches where possibly the WS standards provide means for referencing call results in subsequent calls inside the batch transaction. Is there any such thing already available? Maybe even as a PHP client?
trying to eliminate lock contention in the database by carefully using row-level locks etc. However, when inserting new elements, my guess is that usually pages and index pages need to be locked by the DB.
maybe some server-side persistence layer using optimistic locking? But again, that would not return any DB IDs back to the client before the final commit if DB writes would be postponed until the commit (don't know if that's possible at all).
What do YOU think?
Transactions are a powerful tool, and we easily get into a thinking pattern in which we see every problem as a nail we hit with this big hammer. I can relate to your confusion because I've experienced it myself. Unfortunately, I have no better advice for you than to try not to think in terms of transactions but in terms of atomic API calls.
When I think in terms of transactions, my thought pattern usually goes like this:
start transaction
read (repeat as required)
update (repeat as required)
commit/roll back
It takes some time to realize that we overuse this pattern. Actual conflicts are rare, and there are many other ways of dealing with them. Here is a commonly used one in APIs:
read and send data to client (atomic API call)
update data (on the client)
send original + updates back to the server (atomic API call)
start transaction (on server)
read
compare with original from client
if not same, return error (client should retry)
if same, update
commit
The last six points are part of the implementation of the API call.
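A minimal sketch of that compare-then-update step, with all names hypothetical and an in-memory dictionary standing in for the short-lived server-side transaction:

using System.Collections.Generic;

class ConfigService
{
    private readonly Dictionary<string, string> store = new Dictionary<string, string>();
    private readonly object gate = new object();

    // Atomic API call 1: read and send data to the client.
    public string Read(string key)
    {
        lock (gate)
        {
            return store.TryGetValue(key, out var value) ? value : null;
        }
    }

    // Atomic API call 2: the client sends back the original it read plus its update.
    // Returns false on a conflict, in which case the client re-reads and retries.
    public bool Update(string key, string original, string updated)
    {
        lock (gate)                                                      // stands in for "start transaction"
        {
            var current = store.TryGetValue(key, out var v) ? v : null;  // read
            if (current != original)                                     // compare with original from client
                return false;                                            // not same: error, client should retry
            store[key] = updated;                                        // same: update
            return true;                                                 // commit
        }
    }
}

In a real API the comparison would typically use a version number or timestamp rather than the whole original record, but the call shape is the same.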
Ferenc Mihaly
http://theamiableapi.com

NSManagedObjectContext with different processes

I have two processes that are talking to the same persistent store. I save the context on one process, and I post a distributed notification. The other process sees the distributed notification, and fetches its data again, but still receives the old data. Is there some kind of "flushing" I need to do to get the other process to get the correct data from the store?
EDIT: So, it turns out that I was flushing the data correctly. NSManagedObjectContext has a refreshObject:mergeChanges: method that you use to do this. The issue appears to be timing related. Let's say I have two processes, A and B. Process A is the main process and saves to the database. Then Process B saves to the database and sends a notification to Process A that it has done so, and Process A fetches the new data. I've found that if Process A's save and Process B's save are too close together, the old data is fetched by Process A even if I refresh. If I force some time to pass between the two saves, then it works out correctly.
Obviously this seems like some kind of race condition, where perhaps the notification is sent before the data is actually saved to the database. However, the notification is sent in the didSave method of the managed object, at which point the context has already saved.
You should look into the merge policy concept in order to manage, fetch, and communicate the correct values of a persistent store coordinator between different contexts.
Here -> http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreData/Articles/cdChangeManagement.html#//apple_ref/doc/uid/TP30001201-CJBDBHCB
That should resolve the problem.
Hope this can help.

Persisting items being uploaded via web service to disk

I have a launchd daemon that every so often uploads some data via a web service using NSOperationQueue.
I need to be able to persist this data so that it can later be re-uploaded in the event of failure, even between sessions (in case the computer shuts down, for example).
This is not a high-load application; it probably receives items intermittently, no more than one or two every minute, often with several-hour gaps in between.
My initial implementation without this persistence in place is as follows:
Daemon receives data.
Daemon parses data into an object of type MyDataObject.
Daemon creates instance of NSOperation subclass with MyDataObject as the object to upload and adds it to its NSOperationQueue.
NSOperationQueue goes through and uploads MyDataObject via web service as it is able.
This part all functions just fine. The part I now want to add is the persistence in case of web service failure, computer shut down, etc.
It seems like I could use an NSMutableArray of MyDataObjects, along with NSKeyed(Un)archiver, containing all the items which had not yet been uploaded, and observe the -isFinished key of all the operations to remove items from the array, but it seems like there should be a simpler way to do this, with less room for things to go wrong, especially as far as thread safety goes.
Can somebody point me in the right direction?
You could add two operations per item. The first would store the item to local storage, and the second, the upload itself, would depend on the first and would remove the item from local storage on success.
Then, when you want to restore any items from local storage, you create only the store-to-the-cloud operations, not the store-locally operations. As before, they remove the items from local storage only if they succeed, and if they don't succeed, they leave the items in local storage for the next attempt.
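The same persist-first idea sketched in C# (only to match the other snippets on this page; the answer itself is about NSOperation dependencies), with the folder name and the upload call as hypothetical stand-ins for the web-service upload:

using System.IO;
using System.Threading.Tasks;

class PersistentUploader
{
    private const string PendingDir = "pending-uploads";

    // "Operation 1": persist the item before attempting the upload.
    public static async Task EnqueueAsync(string id, string payload)
    {
        Directory.CreateDirectory(PendingDir);
        var path = Path.Combine(PendingDir, id + ".json");
        File.WriteAllText(path, payload);
        await UploadAndCleanUpAsync(path);   // "operation 2" depends on it
    }

    // On launch, re-create only the upload step for anything still on disk.
    public static async Task ResumePendingAsync()
    {
        if (!Directory.Exists(PendingDir)) return;
        foreach (var path in Directory.GetFiles(PendingDir, "*.json"))
            await UploadAndCleanUpAsync(path);
    }

    private static async Task UploadAndCleanUpAsync(string path)
    {
        try
        {
            await UploadAsync(File.ReadAllText(path)); // hypothetical web-service call
            File.Delete(path);                         // removed only on success
        }
        catch
        {
            // Leave the file in place so it is retried on the next attempt.
        }
    }

    private static Task UploadAsync(string payload) => Task.CompletedTask; // stub
}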