BookSleeve: setting expiration on multiple key/values - Redis

Unless I am missing something, I don't see a Multiple Set/Add overload that allows you to set multiple keys with an expiration.
var conn = new RedisConnection("server");
Dictionary<string, string> keyvals;
conn.Strings.Set(0, keyvals, expiration);
or even doing it with multiple operations:
conn.Strings.Set(0, keyvals);
conn.Expire(keyvals.Keys, expiration);

No such Redis operation exists - EXPIRE is not variadic. However, since the API is pipelined, just call the method multiple times. If you want to ensure the absolute best performance, you can suspend eager socket flushing while you do this:
conn.SuspendFlush();
try {
    foreach(...)
        conn.Keys.Expire(...);
} finally {
    conn.ResumeFlush();
}
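Putting the two together, a minimal sketch might look like this (assuming the keyvals dictionary from the question and an expiry in seconds; the exact overloads may differ between BookSleeve versions):
conn.SuspendFlush();
try {
    foreach (var pair in keyvals) {
        conn.Strings.Set(0, pair.Key, pair.Value);     // queue the SET against db 0
        conn.Keys.Expire(0, pair.Key, expirySeconds);  // queue the EXPIRE for the same key
    }
} finally {
    conn.ResumeFlush(); // flush everything to the socket in one go
}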

Here is my approach (using StackExchange.Redis, BookSleeve's successor):
var expireTime = ...
var batchOp = redisCache.CreateBatch();
foreach (...) {
    batchOp.StringSetAsync(key, value, expireTime);
}
batchOp.Execute();
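One caveat: Execute() only releases the queued commands to the server; the individual StringSetAsync tasks complete afterwards. If you want to observe per-command failures, keep the tasks and await them. A sketch, again assuming a keyvals dictionary:
var batchOp = redisCache.CreateBatch();
var pending = new List<Task>();
foreach (var pair in keyvals)
{
    pending.Add(batchOp.StringSetAsync(pair.Key, pair.Value, expireTime));
}
batchOp.Execute();            // sends the queued commands
await Task.WhenAll(pending);  // surfaces any per-command errors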

Related

How to span a ConcurrentDictionary across load-balancer servers when using SignalR hub with Redis

I have an ASP.NET Core web application set up with SignalR, scaled out with Redis.
Using the built-in groups works fine:
Clients.Group("Group_Name");
and survives multiple load-balanced servers. I'm assuming that SignalR persists those groups in Redis automatically, so all servers know what groups we have and who is subscribed to them.
However, in my situation I can't just rely on Groups (or Users): there is no way to map a connectionId back to its group (say, when overriding OnDisconnectedAsync, where only the connection id is known), and you always need the Group_Name to identify the group. I need that mapping to identify which part of the group is online, so that when OnDisconnectedAsync is called I know which group this user belongs to and on which side of the conversation he is.
I've done some research, and all the suggestions (including the Microsoft docs) are to use something like:
static readonly ConcurrentDictionary<string, ConversationInformation> connectionMaps;
in the hub itself.
Now, this is a great solution (and thread-safe), except that it exists only in one load-balanced server's memory; the other servers each have their own instance of this dictionary.
The question is, do I have to persist connectionMaps manually? Using Redis for example?
Something like:
public class ChatHub : Hub
{
    static readonly ConcurrentDictionary<string, ConversationInformation> connectionMaps;
    ChatHub(IDistributedCache distributedCache)
    {
        connectionMaps = distributedCache.Get("ConnectionMaps");
        /// I think connectionMaps should not be static any more.
    }
}
And if yes, is it thread-safe? If no, can you suggest a better solution that works with load-balancing?
I've been battling with the same issue on this end. What I've come up with is to persist the collections in the Redis cache, utilising a StackExchange.Redis.IDatabaseAsync alongside locks to handle concurrency.
This unfortunately serialises the entire process, but I couldn't quite figure out a way around that.
Here's the core of what I'm doing; this acquires a lock and returns the deserialised collection from the cache:
private async Task<ConcurrentDictionary<int, HubMedia>> GetMediaAttributes(bool requireLock)
{
    if (requireLock)
    {
        var retryTime = 0;
        try
        {
            while (!await _redisDatabase.LockTakeAsync(_mediaAttributesLock, _lockValue, _defaultLockDuration))
            {
                // wait till we can get a lock on the data, 100ms at a time
                await Task.Delay(100);
                retryTime += 100;
                if (retryTime > _defaultLockDuration.TotalMilliseconds)
                {
                    _logger.LogError("Failed to get Media Attributes");
                    return null;
                }
            }
        }
        catch (TaskCanceledException e)
        {
            _logger.LogError("Failed to take lock within the default 5 second wait time " + e);
            return null;
        }
    }
    var mediaAttributes = await _redisDatabase.StringGetAsync(MEDIA_ATTRIBUTES_LIST);
    if (!mediaAttributes.HasValue)
    {
        return new ConcurrentDictionary<int, HubMedia>();
    }
    return JsonConvert.DeserializeObject<ConcurrentDictionary<int, HubMedia>>(mediaAttributes);
}
I update the collection like so after I'm done manipulating it:
private async Task<bool> UpdateCollection(string redisCollectionKey, object collection, string lockKey)
{
    var success = false;
    try
    {
        success = await _redisDatabase.StringSetAsync(redisCollectionKey, JsonConvert.SerializeObject(collection, new JsonSerializerSettings
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore
        }));
    }
    finally
    {
        await _redisDatabase.LockReleaseAsync(lockKey, _lockValue);
    }
    return success;
}
and when I'm done I just ensure the lock is released for other instances to grab and use:
private async Task ReleaseLock(string lockKey)
{
    await _redisDatabase.LockReleaseAsync(lockKey, _lockValue);
}
I'd be happy to hear if you find a better way of doing this. I struggled to find any documentation on scale-out with data retention and sharing.
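For what it's worth, here is a hypothetical sketch of how these helpers can be wired into a hub callback (GetMediaId is a made-up helper that maps a connection id to the dictionary key; note that UpdateCollection releases the lock in its finally block, so ReleaseLock is only needed if you bail out before calling it):
public override async Task OnDisconnectedAsync(Exception exception)
{
    // take the lock and load the shared collection from Redis
    var mediaAttributes = await GetMediaAttributes(requireLock: true);
    if (mediaAttributes != null)
    {
        // GetMediaId is hypothetical: map the connection id to the int key
        mediaAttributes.TryRemove(GetMediaId(Context.ConnectionId), out _);
        // persists the collection and releases the lock in its finally block
        await UpdateCollection(MEDIA_ATTRIBUTES_LIST, mediaAttributes, _mediaAttributesLock);
    }
    await base.OnDisconnectedAsync(exception);
}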

RavenDB fails with ConcurrencyException when using new transaction

This code always fails with a ConcurrencyException:
[Test]
public void EventOrderingCode_Fails_WithConcurrencyException()
{
    Guid id = Guid.NewGuid();
    using (var scope1 = new TransactionScope())
    using (var session = DataAccess.NewOpenSession)
    {
        session.Advanced.UseOptimisticConcurrency = true;
        session.Advanced.AllowNonAuthoritativeInformation = false;
        var ent1 = new CTEntity
        {
            Id = id,
            Name = "George"
        };
        using (var scope2 = new TransactionScope(TransactionScopeOption.RequiresNew))
        {
            session.Store(ent1);
            session.SaveChanges();
            scope2.Complete();
        }
        var ent2 = session.Load<CTEntity>(id);
        ent2.Name = "Gina";
        session.SaveChanges();
        scope1.Complete();
    }
}
It fails at the last session.SaveChanges, stating that it is using a non-current etag. If I use Required instead of RequiresNew for scope2 - i.e. using the same transaction - it works.
Now, since I load the entity (ent2), it should be using the newest etag, unless I am getting some cached value attached to scope1 (but I have disabled caching). So I do not understand why this fails.
I really need this setup. In the production code the outer TransactionScope is created by NServiceBus, and the inner is for controlling an aspect of event ordering. It cannot be the same Transaction.
And I need the optimistic concurrency too, in case other threads use the entity at the same time.
BTW: This is using Raven 2.0.3.0
Since no one else has answered, I had better give it a go myself.
It turns out this was human error. Due to a bad configuration of our IoC container, DataAccess.NewOpenSession gave me the same session every time (across other tests). In other words, Raven works as expected :)
Before I found out about this, I also experimented with using TransactionScopeOption.Suppress instead of RequiresNew. That also worked; I just had to make sure that whatever I did in the suppressed scope could not fail, which was a valid option in my case. Only the inner scope changes, as sketched below.
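A minimal sketch of that Suppress variant (the rest of the test stays exactly as above):
// Suppress runs the inner block outside any ambient transaction
using (var scope2 = new TransactionScope(TransactionScopeOption.Suppress))
{
    session.Store(ent1);
    session.SaveChanges(); // executes immediately, not enlisted in scope1
    scope2.Complete();
}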

RavenDB: The best way to wait for non-stale index first then query stale index if timeout

I'm using RavenDB 2.5. I'm in a situation where I need to wait for a non-stale index first, and if it times out after 15 seconds, query the stale index rather than throw a timeout exception. Here is my code:
RavenQueryStatistics stats;
var result = queryable.Statistics(out stats).Take(maxPageSize).ToList();
if (stats.IsStale)
{
    try
    {
        return queryable.Customize(x => x.WaitForNonStaleResultsAsOfLastWrite(TimeSpan.FromSeconds(15))).ToList();
    }
    catch (Exception)
    {
        return result;
    }
}
else
{
    return result;
}
I need to add an extension method to make the above code work for all queries, for example:
public static List<T> ToList<T>(this IRavenQueryable<T> queryable)
I may also need to add extension methods to override .All(), .Any(), .Contains(), .Count(), .ToList(), .ToArray(), .ToDictionary(), .First(), .FirstOrDefault(), .Single(), .SingleOrDefault(), .Last(), .LastOrDefault(), etc.
I wonder if there is a better solution for this. What's the best practice?
Does RavenDB have an AOP cut point so that, when the timeout exception is thrown, we can switch the query to the stale index and return stale results? (For illustration, the kind of generic wrapper I have in mind is sketched below.)
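A sketch of such a wrapper, reusing the 15-second timeout from above (ToListPreferNonStale is a made-up name, not an existing RavenDB API; it is just the pattern from my code made generic):
public static List<T> ToListPreferNonStale<T>(this IRavenQueryable<T> queryable)
{
    RavenQueryStatistics stats;
    var result = queryable.Statistics(out stats).ToList();
    if (!stats.IsStale)
        return result;
    try
    {
        // retry, waiting up to 15 seconds for the index to catch up
        return queryable
            .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite(TimeSpan.FromSeconds(15)))
            .ToList();
    }
    catch (Exception)
    {
        return result; // timed out: fall back to the stale results
    }
}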
Depending on your requirements, I would prefer to have this as two separate calls from the end-user client. First issue a query without waiting for non-stale results and show those results to the end-user immediately. If the results are stale, make that visible to the end-user and issue a second query to the server, this time waiting for non-stale results, along the lines of the sketch below.
That way the end-user will always get something to see quickly, without potentially having to wait 15 seconds even for stale results.
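A rough sketch of that two-call flow (Item, maxPageSize, and ShowResults are placeholders for your own types and UI plumbing):
// Call 1: return whatever the index has right now
RavenQueryStatistics stats;
var quick = session.Query<Item>().Statistics(out stats).Take(maxPageSize).ToList();
ShowResults(quick, isStale: stats.IsStale);

// Call 2 (issued by the client only if the first response was stale):
var fresh = session.Query<Item>()
    .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite(TimeSpan.FromSeconds(15)))
    .Take(maxPageSize)
    .ToList();
ShowResults(fresh, isStale: false);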
You can force the document store to always wait for the last write; then you can use queries without Customize instructions:
documentStore.Conventions.DefaultQueryingConsistency = ConsistencyOptions.QueryYourWrites;
Note: if the index is very busy, or you write something to the DB and query related data immediately, using async instead of a timeout works better.
// deal with a very busy index
using (var session = documentStore.OpenAsyncSession())
{
    var result = await session.Query<...>()
        .Where(x => ...)
        .ToListAsync();
}

// write, then read
using (var session = documentStore.OpenAsyncSession())
{
    await session.StoreAsync(entity);
    await session.SaveChangesAsync();
    // query data related to the entity
    var result = await session.Query<...>()
        .Where(x => ...)
        .ToListAsync();
}

Avoiding duplicate entries in NHibernate

I am trying to store messages that I receive in a database using NHibernate. However, there is a possibility that the same message is received twice, and in that case I do not want to save the duplicate in the database. My first thought was to do the following:
// in SaveRange(IEnumerable<Message> messages)
var alreadyStoredMessages = session.Query<Message>().Intersect(messages);
var newMessages = messages.Except(alreadyStoredMessages);
However, it seems that NHibernate does not support Intersect, so this results in an exception. I know I could always fetch all the messages, convert them to a list or array, and then do the intersect, but that wouldn't be very efficient.
The Message class implements IEquatable<Message> and also overrides GetHashCode() and Equals(object obj). Equality depends on several properties (a timestamp, several strings, etc.), roughly as sketched below.
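For reference, a minimal sketch of those equality members (the property names here are illustrative, not the real ones):
public class Message : IEquatable<Message>
{
    public DateTime Timestamp { get; set; }
    public string Sender { get; set; }
    public string Body { get; set; }

    public bool Equals(Message other) =>
        other != null
        && Timestamp == other.Timestamp
        && Sender == other.Sender
        && Body == other.Body;

    public override bool Equals(object obj) => Equals(obj as Message);

    public override int GetHashCode() =>
        Timestamp.GetHashCode() ^ (Sender?.GetHashCode() ?? 0) ^ (Body?.GetHashCode() ?? 0);
}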
If all new messages come at once, use a filter:
var alreadyStoredMessages = session.QueryOver<Message>()
    .WhereRestrictionOn(m => m.Timestamp).In(messages.Select(m => m.Timestamp).ToArray())
    .AsEnumerable()
    .Intersect(messages);
var newMessages = messages.Except(alreadyStoredMessages).ToList();
Assuming duplicate messages follow shortly after each other: hold a buffer of the last n received messages and check against it.
var lastMessages = new Queue<Message>(100);
while (true)
{
    var message = GetNextMessage();
    if (!lastMessages.Contains(message))
    {
        lastMessages.Enqueue(message);
        session.Save(message);
        if (lastMessages.Count >= 100)
            lastMessages.Dequeue();
    }
}

NHibernate - Handling StaleObjectStateException to always commit client changes - Need advice/recommendation

I am trying to find the best way to handle this exception and force client changes to overwrite any other changes that caused the conflict. The approach I came up with is to wrap the call to Session.Transaction.Commit() in a loop. Inside the loop is a try-catch block that handles each stale object individually: copy its properties (except the row-version property), refresh the object to get the latest DB data, recopy the original values onto the refreshed object, and then merge. The loop then commits again, and if another StaleObjectStateException occurs the same process applies. The loop keeps going until all conflicts are resolved.
This method is part of a UnitOfWork class. To make it clearer I'll post my code:
// 'Client wins' rule: any conflict found will always cause client changes to
// overwrite anything else.
public void CommitAndRefresh() {
    bool saveFailed;
    do {
        try {
            _session.Transaction.Commit();
            _session.BeginTransaction();
            saveFailed = false;
        } catch (StaleObjectStateException ex) {
            saveFailed = true;
            // Get the stale object with the client changes
            var staleObject = _session.Get(ex.EntityName, ex.Identifier);
            // Extract the row-version property name
            IClassMetadata meta = _sessionFactory.GetClassMetadata(ex.EntityName);
            string rowVersionPropertyName = meta.PropertyNames[meta.VersionProperty];
            // Store all property values from the client changes
            var propertyValues = new Dictionary<string, object>();
            var publicProperties = staleObject.GetType().GetProperties();
            foreach (var p in publicProperties) {
                if (p.Name != rowVersionPropertyName) {
                    propertyValues.Add(p.Name, p.GetValue(staleObject, null));
                }
            }
            // Get the latest data for the stale object from the database
            _session.Refresh(staleObject);
            // Reapply the original client changes, except for the row-version
            foreach (var p in publicProperties) {
                if (p.Name != rowVersionPropertyName) {
                    p.SetValue(staleObject, propertyValues[p.Name], null);
                }
            }
            // Merge
            _session.Merge(staleObject);
        }
    } while (saveFailed);
}
The above code works fine and handles concurrency with the client-wins rule. However, I was wondering if there are any built-in capabilities in NHibernate to do this for me, or if there is a better way to handle it.
Thanks in advance,
What you're describing is a lack of concurrency checking. If you don't use a concurrency strategy (optimistic-lock, version, or pessimistic), StaleObjectStateException will not be thrown and the update will simply be issued; a version mapping is sketched below.
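For example, a minimal sketch of a version mapping using NHibernate's mapping-by-code (the entity and property names are made up for illustration):
public class Document
{
    public virtual Guid Id { get; set; }
    public virtual int RowVersion { get; set; } // bumped by NHibernate on each update
    public virtual string Name { get; set; }
}

public class DocumentMap : ClassMapping<Document>
{
    public DocumentMap()
    {
        Id(x => x.Id, m => m.Generator(Generators.GuidComb));
        // With a version mapping, a conflicting update makes
        // Flush/Commit throw StaleObjectStateException.
        Version(x => x.RowVersion, m => { });
        Property(x => x.Name);
    }
}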
Okay, now I understand your use case. One important point is that the ISession should be discarded after an exception is thrown. You can use ISession.Merge to merge changes between a detached and a persistent object rather than doing it yourself. Unfortunately, Merge does not cascade to child objects, so you still need to walk the object graph yourself. The implementation would look something like:
catch (StaleObjectStateException ex)
{
    if (isPowerUser)
    {
        var newSession = GetSession();
        // Merge will automatically get first
        newSession.Merge(staleObject);
        newSession.Flush();
    }
}